Dell 7040 NVMe SSD Boot Issue
-
@chrisdecker You’ll also find this article helpful, as it’s silly to flip/flop DHCP settings manually all day long every day: https://wiki.fogproject.org/wiki/index.php?title=BIOS_and_UEFI_Co-Existence
-
@george1421 Thank you! That worked. Now FOG is not recognizing the hard drive. Any ideas?
-
@chrisdecker What version of FOG are you running? FOG 1.3.x should detect NVMe drives without issue.
-
Screenshot
-
Running FOG 1.3.4-RC-2 with Kernel bzImage 4.9.4
-
@chrisdecker Welp, it sure does appear that FOG can’t see the disk.
Will you do a debug capture or deploy this time? When you schedule the capture or deploy, be sure to tick the debug checkbox. Then PXE boot the target computer. After a few screens of output it should drop you to a Linux command prompt on the target computer. Then key in lsblk and post the results here.
-
I might also suggest seeing if the NVMe SSD Disk is setup for Raid or AHCI/SATA. From what I can see, it’s probably being presented in RAID right now.
-
@Tom-Elliott @george1421 RAID was on.
Switched to AHCI and I can now register the computer.
-
I have successfully deployed an image to the Optiplex 7040 with the same SSD as yours using UEFI (Secure Boot Disabled).
FOG Information:
Running Version 1.3.1-RC-1
SVN Revision: 6052
Kernel Version: bzImage 4.9.0
Host EFI Exit Type: Refined_EFI
PXE File: ipxe7156.efi
Image: Windows 10
-
@chrisdecker said in Dell 7040 NVMe SSD Boot Issue:
@Tom-Elliott @george1421 RAID was on.
Switched to AHCI and I can now register the computer.
It would still be interesting to know what lsblk says with raid mode on (Dell default).
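For comparison, here is a minimal sketch of the check a working AHCI-mode machine should pass in the debug shell. The lsblk sample below is a mock-up of a typical NVMe layout, not a capture from this 7040:

```shell
# Illustrative check only: the lsblk sample is a mock-up of an AHCI-mode 7040
# with an NVMe disk, not real output from this machine.
sample='NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1   259:0    0 238.5G  0 disk
nvme0n1p1 259:1    0   500M  0 part'

# In the FOS debug shell you would pipe the real command instead: lsblk | grep '^nvme'
if printf '%s\n' "$sample" | grep -q '^nvme'; then
  echo "NVMe disk visible"
else
  echo "no NVMe disk found"
fi
```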
-
@jburleson said in Dell 7040 NVMe SSD Boot Issue:
Host EFI Exit Type: Refined_EFI
PXE File: ipxe7156.efi
I find this interesting. Did ipxe7156.efi work for RAID mode where ipxe.efi did not?
-
@george1421 I’m using ipxe.efi. Haven’t tried anything else at this point.
-
@chrisdecker I’m going to mark the thread solved, as we know changing the HDD presentation type from RAID to AHCI will allow you to use the system.
I agree with @george1421, however, and would like to see what lsblk reports when the disk is in RAID mode. That said, I suspect it doesn’t find anything because the RAID utilities aren’t being called to even try to scan anything. That, or the way the RAID is presented to the FOS system isn’t recognized (it could be driver based, I suppose).
-
@Tom-Elliott IMO: The concern I have is that RAID On is the default for almost all Dell systems, UEFI or BIOS. So for every 7040 in UEFI mode, the OP or IT tech will need to change the disk support method. This can be automated with Dell’s CCTK; it’s just a pain and will continue to cause FOG support calls.
I’ll grab a 7040 from our test lab and see if I can find a consistent answer.
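For what it’s worth, the CCTK step can be scripted; a minimal sketch, assuming Dell Command | Configure is installed at its usual Linux path (the path and option name are assumptions to verify against your CCTK version):

```shell
# Hypothetical automation sketch: flip SATA operation to AHCI via Dell CCTK.
# Install path and option values are assumptions; check `cctk -h --embsataraid` on your build.
CCTK_BIN=/opt/dell/toolkit/bin/cctk
SATA_MODE=ahci
CMD="$CCTK_BIN --embsataraid=$SATA_MODE"
echo "$CMD"
# On a real 7040 you would execute $CMD (adding --valsetuppwd=<pwd> if a setup
# password is set), then reboot for the firmware change to take effect.
```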
-
I modify the BIOS when the computers come in. One of the settings I change is to switch the SATA operation to AHCI.
I switched from ipxe.efi since the Surface Pro 4 would not boot from it.
ipxe7156.efi does not work for RAID mode either (just tested it).
After my next appointment I will run debug and see if I can get you some additional information on it.
-
Here is the output of lsblk.
-
@jburleson Well, that sure is a WTF kind of picture. It DOES tell us a bit more of what we need. Why there are so many partitions is interesting.
I was just about to grab a 7040 and do the same. I’ll still do that; it’ll give me something to play with over the lunch hour.
-
Not sure if this will help any.
mdadm -D /dev/md0
shows
Raid Level 0
Total Devices 0
State : inactive
cat /proc/mdstat
shows
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty]
unused devices: <none>
-
@jburleson Well yes and no.
While it’s using the RAID controller, it’s not really a RAID setup.
What I find interesting in the picture is the partition name AND the device major and minor numbers. I don’t think device major 43 is currently allowed.
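One way to see what a major number maps to is /proc/devices; a sketch using a trimmed, illustrative sample (run it against the real file in the FOS debug shell):

```shell
# Map a block-device major number to its driver name. The sample below is a
# trimmed illustration of /proc/devices; read the real file on the target.
sample='259 blkext
  8 sd
  9 md
 43 nbd'
major=43
driver=$(printf '%s\n' "$sample" | awk -v m="$major" '$1 == m {print $2}')
echo "major $major -> $driver"
```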
-
@george1421 On my initial test
- Grab a 7040 from inventory, reset the BIOS back to factory defaults, and change the mode to UEFI (RAID On by default). Note: the system was a functional system in BIOS mode with an MBR image.
- Schedule a debug deploy task
- lsblk shows no hard drive, period.
[Wed Jan 25 root@fogclient ~]# lsblk
[Wed Jan 25 root@fogclient ~]#
and onboard hardware
[Wed Jan 25 root@fogclient ~]# lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation Sky Lake Host Bridge/DRAM Registers [8086:191f] (rev 07)
00:01.0 PCI bridge [0604]: Intel Corporation Sky Lake PCIe Controller (x16) [8086:1901] (rev 07)
00:02.0 VGA compatible controller [0300]: Intel Corporation Sky Lake Integrated Graphics [8086:1912] (rev 06)
00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31)
00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31)
00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31)
00:16.3 Serial controller [0700]: Intel Corporation Sunrise Point-H KT Redirection [8086:a13d] (rev 31)
00:17.0 RAID bus controller [0104]: Intel Corporation SATA Controller [RAID mode] [8086:2822] (rev 31)
00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a146] (rev 31)
00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31)
00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31)
00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31)
00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)
[Wed Jan 25 root@fogclient ~]#
Now kernel drivers associated with the hardware
[Wed Jan 25 root@fogclient ~]# lspci -k
00:00.0 Host bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers (rev 07)
	Subsystem: Dell Device 06b9
	Kernel driver in use: skl_uncore
lspci: Unable to load libkmod resources: error -12
00:01.0 PCI bridge: Intel Corporation Sky Lake PCIe Controller (x16) (rev 07)
	Kernel driver in use: pcieport
00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 06)
	Subsystem: Dell Device 06b9
00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
	Subsystem: Dell Device 06b9
	Kernel driver in use: xhci_hcd
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
	Subsystem: Dell Device 06b9
00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
	Subsystem: Dell Device 06b9
00:16.3 Serial controller: Intel Corporation Sunrise Point-H KT Redirection (rev 31)
	Subsystem: Dell Device 06b9
	Kernel driver in use: serial
00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 31)
	Subsystem: Dell Device 06b9
	Kernel driver in use: ahci
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
	Subsystem: Dell Device 06b9
00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
	Subsystem: Dell Device 06b9
00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
	Subsystem: Dell Device 06b9
00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
	Subsystem: Dell Device 06b9
	Kernel driver in use: i801_smbus
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
	Subsystem: Dell Device 06b9
	Kernel driver in use: e1000e
[Wed Jan 25 root@fogclient ~]#
This tells me the Linux kernel supports the RAID controller, so the hardware IS supported by Linux. Something else must not be happy.
00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 31)
	Subsystem: Dell Device 06b9
>>	Kernel driver in use: ahci
Speculation: if a UEFI/GPT disk is not found in the system, then no disk is displayed.
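The driver check above can also be scripted; a sketch that pulls the “Kernel driver in use” line for the RAID controller out of an lspci -k dump (the sample is trimmed from the output above):

```shell
# Extract the kernel driver bound to the RAID controller from lspci -k output.
# The sample is trimmed from the capture above; on a real box you would pipe
# `lspci -k -s 00:17.0` instead of using a saved sample.
sample='00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 31)
	Subsystem: Dell Device 06b9
	Kernel driver in use: ahci'
driver=$(printf '%s\n' "$sample" | awk -F': ' '/Kernel driver in use/ {print $2}')
echo "RAID controller driver: $driver"
```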