[SOLVED] PXE chaining fails on SSD
I have several shipments of new computers with an SSD that will not chain from PXE boot to the local drive. It just hangs there.
I am running fog 8064 on Red Hat 6 Linux, and I am trying to boot a Dell OptiPlex 5040 running Windows 10 on a Class 10 SSD (I have another shipment of computers that is failing on a Class 20 SSD). If I turn off network boot, it boots to the SSD okay.
I tried to compile the iPXE binaries myself; the build did not fail, but booting them gave a bootmgr error, so I probably did not do it correctly. When was the last time they were updated?
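For reference, a typical upstream iPXE build looks something like the sketch below. This is the generic build process from the iPXE project, not FOG's exact build (FOG pins its own iPXE revision and embeds its own boot script, so a stock build may behave differently); the `fog.ipxe` embedded-script filename here is a placeholder.

```shell
# Fetch the upstream iPXE sources. Note that FOG ships a pinned,
# patched iPXE tree, so a stock build may not match FOG's binaries.
git clone https://github.com/ipxe/ipxe.git
cd ipxe/src

# Build the BIOS chainloading binary that FOG serves over TFTP,
# embedding a boot script (fog.ipxe is a hypothetical placeholder).
make bin/undionly.kpxe EMBED=fog.ipxe

# The equivalent target for UEFI clients:
make bin-x86_64-efi/ipxe.efi EMBED=fog.ipxe
```

A binary built without the expected embedded script or build options can chainload differently than the stock FOG binaries, which is one possible source of a bootmgr error.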
@tmerrick I know this is asking a bit much, but if you swap one of the Class 10 or Class 20 drives with a Samsung EVO, do they exit correctly with sanboot? That would help us confirm that it is clearly something different about these Class 10/20 drives or the motherboard. If you don’t have time to debug, that is ok. It would just help the next person with the same issue.
I’m going to mark this thread as solved either way. Thanks again for your feedback.
Yes, I have a working solution.
It is failing on both Toshiba (Class 10) and Samsung (Class 20) SSDs. I am not sure what the difference between a plain old Samsung EVO and these drives is, but the EVO sanboots fine.
I usually use PXE boot because these are student computer labs that I multicast to 2-3 times a year.
@tmerrick So it sounds like you have your answer?
One last question. Can you confirm whether this is specifically a HDD/SSD drive issue, or whether it is model specific? Typically this issue only appears on certain model devices and not on particular storage medium types.
One last comment: unless you need unattended imaging, you don’t need to PXE boot every time. That setup creates a single point of failure for your network, i.e. no PXE server means no client boot. In my case we only image computers when a tech is standing in front of the device, so we manually press F12 and select network boot to connect to FOG for imaging.
Yes, it was sanboot before. As I recall, the exit type "exit" failed also, but "grub first hard drive" does work. Yes, they are traditional SATA-attached SSDs.
@tmerrick OK then, unless you changed the global BIOS exit mode from the default, your global exit is probably sanboot. As a test, go into the host configuration for that device and set the ‘Host Bios Exit Type’ to "exit" or "grub first hard drive". You may have to experiment with the exit type to find the right one for this hardware.
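For context, FOG's BIOS exit types map roughly onto different iPXE exit strategies. A simplified sketch of what each option does (this is not FOG's actual embedded script, just an illustration using standard iPXE commands):

```text
# 'sanboot' exit type: iPXE boots the first local disk as a SAN drive.
# Some disk/firmware combinations hang here, which matches this thread.
sanboot --no-describe --drive 0x80 || goto boot_failed

# 'exit' exit type: iPXE returns control to the BIOS, which then
# moves on to the next device in its boot order.
exit

# 'grub first hard drive': iPXE chainloads a small GRUB image that
# hands off to the first hard drive, a common fallback when sanboot
# or exit hang on specific hardware.
```

This is why trying each exit type per host is the usual workaround: which one works depends on the firmware and drive, not on FOG itself.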
Just for clarity, these SSDs are traditional SATA-attached drives like the Samsung EVO, and not M.2 or some other configuration, right?
They are BIOS, using the global exit mode. Yes, I have about 200 computers that work fine, but these are the first with SSDs. The host registration/imaging part works fine. It is just the chain to the local SSD that is failing.
I guess I have a few questions to start.
- Are these systems UEFI or BIOS?
- What is your FOG exit mode (either global or per host)?
- (Just thought of another) Do other models exit to the hard drive OK?