Dell 7040 NVMe SSD Boot Issue
-
@Tom-Elliott Thank you. Please forgive my ignorance. I’m assuming I change that somewhere in the TFTP Server Settings, but which specific line do I change?
-
@chrisdecker You need to make the modification on your DHCP server: change DHCP option 67 (the boot-file option) to ipxe.efi
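For reference, if your DHCP server is ISC dhcpd, the change looks roughly like this (a sketch only; the subnet, range, and server addresses below are placeholders, not your values):

# /etc/dhcp/dhcpd.conf
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  next-server 192.168.1.10;    # option 66: your FOG/TFTP server
  filename "ipxe.efi";         # option 67: boot file for UEFI clients
}

Restart the dhcpd service afterwards so the change takes effect.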
-
@chrisdecker You’ll also find this article helpful, as it’s silly to flip/flop DHCP settings manually all day long every day: https://wiki.fogproject.org/wiki/index.php?title=BIOS_and_UEFI_Co-Existence
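If you're on ISC dhcpd, the coexistence trick in that article boils down to keying the boot file off the client architecture (DHCP option 93) so BIOS and UEFI machines each get the right binary without manual flipping. A rough, untested sketch (file names match FOG's defaults):

option arch code 93 = unsigned integer 16;

if option arch = 00:07 or option arch = 00:09 {
  filename "ipxe.efi";         # x86-64 UEFI clients
} else {
  filename "undionly.kpxe";    # legacy BIOS clients
}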
-
@george1421 Thank you! That worked. Now FOG is not recognizing the hard drive. Any ideas?
-
@chrisdecker What version of FOG are you running? FOG 1.3.x surely detects NVMe drives without issue.
-
Screenshot
-
Running FOG 1.3.4-RC-2 with Kernel bzImage 4.9.4
-
@chrisdecker Welp, it sure does appear that FOG can’t see the disk.
Will you do a debug capture or deploy this time? When you schedule the capture or deploy, be sure to tick the debug checkbox. Then PXE boot the target computer. After a few screens of commands it should drop you to a Linux command prompt on the target computer. Then key in lsblk and post the results here.
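For anyone following along, once you're at that prompt the commands are just:

lsblk
# or, if the plain listing comes back empty, list explicit columns:
lsblk -o NAME,SIZE,TYPE,MODEL

Either output will tell us whether the kernel can see the NVMe device at all.
-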
I might also suggest checking whether the NVMe SSD is set up for RAID or AHCI/SATA. From what I can see, it's probably being presented in RAID right now.
-
@Tom-Elliott @george1421 RAID was on.
Switched to AHCI and I can now register the computer.
-
I have successfully deployed an image to the Optiplex 7040 with the same SSD as yours using UEFI (Secure Boot Disabled).
FOG Information:
Running Version 1.3.1-RC-1
SVN Revision: 6052
Kernel Version: bzImage 4.9.0
Host EFI Exit Type: Refined_EFI
PXE File: ipxe7156.efi
Image: Windows 10
-
@chrisdecker said in Dell 7040 NVMe SSD Boot Issue:
@Tom-Elliott @george1421 RAID was on.
Switched to AHCI and I can now register the computer.
It would still be interesting to know what lsblk says with raid mode on (Dell default).
-
@jburleson said in Dell 7040 NVMe SSD Boot Issue:
Host EFI Exit Type: Refined_EFI
PXE File: ipxe7156.efi
I find this interesting. Did ipxe7156.efi work for RAID mode where ipxe.efi did not?
-
@george1421 I’m using ipxe.efi. Haven’t tried anything else at this point.
-
@chrisdecker I’m going to mark this thread as solved, since we know changing the HDD presentation type from RAID to AHCI will allow you to use the system.
I agree with @george1421, however, and would still like to see what lsblk sees when the disk is in RAID mode. That said, I suspect it doesn’t find anything because the RAID utilities aren’t being called to even try to scan anything, or because the way the RAID is presented to the FOS system isn’t recognized (it could be driver based, I suppose).
-
@Tom-Elliott IMO: the concern I have is that RAID-On is the default for almost all Dell systems, UEFI or BIOS. So for every 7040 in UEFI mode, the OP or IT tech will need to change the disk support method. This can be automated with Dell’s CCTK (rough sketch below); it’s just a pain and will continue to cause FOG support calls.
I’ll grab a 7040 from our test lab and see if I can find a consistent answer.
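For what it's worth, if you do end up scripting it, Dell Command | Configure (CCTK) can flip the setting from a booted OS or WinPE. Something along these lines (treat it as a sketch; the exact option name can vary by CCTK version and model):

cctk --embsataraid           # report the current SATA operation mode
cctk --embsataraid=ahci      # switch SATA operation to AHCI; takes effect on the next reboot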
-
I modify the BIOS when the computers come in. One of the settings I change is to switch the SATA operation to AHCI.
I switched from ipxe.efi since the Surface Pro 4 would not boot from it.
ipxe7156.efi does not work for RAID mode either (just tested it).
After my next appointment I will run debug and see if I can get you some additional information on it.
-
Here is the output of lsblk.
-
@jburleson Well that sure is a WTF kind of picture. It DOES tell us a bit more of what we need. Why there are so many partitions is interesting.
I was just about to grab a 7040 and do the same. I’ll still do that; it’ll give me something to play with over the lunch hour.
-
Not sure if this will help any.
mdadm -D /dev/md0
shows
Raid Level 0
Total Devices 0
State Inactive
cat /proc/mdstat
shows
Personalities: [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
unused devices: <none>
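The next thing I can try when I'm back at that machine (not run yet, so take these as a sketch, and assuming the disk still shows up as /dev/nvme0n1 in RAID mode) is to ask mdadm whether it sees any on-disk RAID metadata, e.g. Intel IMSM:

mdadm --examine /dev/nvme0n1          # look for RAID metadata directly on the NVMe device
mdadm --assemble --scan --verbose     # try to assemble anything mdadm recognizes

That would at least tell us whether the inactive md0 is a half-recognized Intel RST container or just leftover state.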