HP Z640 - NVME PCI-E Drive
-
Hi Friends,
We just got these new Z640 workstations and they come with these NVMe 256GB SSDs plugged into a PCIe slot.
They show up in the BIOS, and when I boot to FOG and check compatibility and partition information, it says it’s compatible and all the partition info pops up no problem. One possible issue is that it assigns it to something like /dev/nvme instead of the standard /dev/sda. But then when I try to image or even just do a hardware inventory, FOG gives me a “HDD not found on system” error and then reboots after 1 minute.
Is this something that just isn’t supported? Do I need to enable something in a custom kernel? What am I missing here?
Thanks,
-JJ
P.S. FOG info:
Version svn 5676
bzImage Version: 4.3.0
bzImage32 Version: 4.3.0
-
Interesting… my first thought is to get a live boot Linux distro and see what the real device is called. I would still think it would be /dev/sda-something, since it’s still… hmm, maybe not, since it isn’t using the SATA interface (just reread your post).
I was also thinking the trunk build had a debug mode where you could boot to a command prompt to inspect the devices. I know the live boot will work.
[Edit] OK, if you boot into the FOG menu, then check system compatibility. Once in system compatibility, select partition information. If that shows up blank, then we will have to go the live Linux route. [/Edit]
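If you do get into a live environment (or the debug console), a couple of quick checks should show what the kernel actually calls the drive - just a sketch, any recent distro ships these:
lsblk
# lists every block device the kernel sees, NVMe drives included
cat /proc/partitions
# lists each registered partition along with its major/minor numbers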
-
It was getting a different device name, /dev/nvme0n1 to be exact.
I was able to get it to run hardware inventory;
I just had to manually add it to the hosts in the FOG web console and set the primary hard disk to that.
I found that information with the FOG compatibility menu option, under partition information.
It worked for running the hardware inventory, but it is not working for imaging the computer, unfortunately.
-
@Arrowhead-IT I was just editing my previous post about using the compatibility menu to get the info.
Now this is just me guessing out loud here (since I haven’t had this exact issue just yet). But if you look in the FOG GUI, at the host management details for this specific computer, note there is a field called “Host Primary Disk”. It would be interesting to know, if you entered /dev/nvme0n1 in that field, whether it would blow up (err… work as intended).
-
@george1421
Yes, that’s how I got the hardware inventory to run without error.
But that doesn’t work for imaging. It acts like it’s going to work, filling partitions and whatnot, then it just says database updated and reboots as if it had finished, instead of launching into partclone.
-
OK then, I think we’ll need one of the @Developers to look into the code to see if it is honoring the “Host Primary Disk” field when launching partclone.
-
I am also trying the Linux live CD idea to see if it reports anything different, and maybe whether there’s a way to change it to something more standard.
-
Wasn’t able to boot to a live CD just yet, but I tried some other stuff.
I tried changing the primary hard disk to /dev/nvme0,
because I figured that might be the new /dev/sda.
When I tried to image the computer I got a new error:
“Erasing Current MBR/GPT Tables… The specified path is a character device!
Warning! GPT Main header not overwritten! Error is 22
Warning! MBR not overwritten! Error is 29
Corrupted partition table was erased. Everything should be fine now”
It does still get through the hardware inventory, but in either case it doesn’t pull any hard disk information.
I’m going to try /dev/nvme next to see if that works.
I also tried changing the BIOS SATA disk mode from RAID to AHCI, but I don’t know if that affects anything on an NVMe drive since it’s plugged into PCI Express.
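From a bit of reading, it sounds like /dev/nvme0 is the NVMe controller’s character device and /dev/nvme0n1 is the actual block device for the first namespace, which would explain the “character device” complaint. A quick way to confirm (assuming the same device names):
ls -l /dev/nvme0 /dev/nvme0n1
# a leading "c" in the permissions column means character device, a leading "b" means block device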
Thanks for all the help thus far.
-JJ
-
I think I would still go down the live boot path. Just get an Ubuntu 15 desktop ISO and burn it to CD. Then boot from the CD; there is an option to try it first (or something like that). I would be interested to know what a production kernel reports. But /dev/nvme0 would be a sane name for that hard drive.
-
I think it’s related to another thread where it’s not seeing the major number properly.
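For anyone following along, NVMe block devices usually register under major number 259 (the blkext dynamic major) rather than the 8 used by sd* disks, so anything keyed to the old majors would miss them. A rough way to check on an affected machine (just a sketch):
grep blkext /proc/devices
# typically prints "259 blkext", the major that NVMe namespaces use
ls -l /dev/nvme0n1
# the number before the comma in the listing is the major; an sd* disk would show 8 there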
-
@Tom-Elliott Is that info posted somewhere for historical/debugging reference in a non-volatile location?
-
I’m currently at an appointment, so I can’t do anything to fix this right now; I just wanted to make sure you all know I’m aware of the issue.
-
I looked through the other post and gave debug mode a try.
I ran lsblk and got the same disk info, except it didn’t have any mount information.
But it said nvme0n1 and partition nvme0n1p1.
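As a sanity check that the kernel itself names the partition with the "p" separator, something like this should work (just a guess at a useful check, run from the same debug console):
ls /sys/block/nvme0n1/
# should include an nvme0n1p1 entry for the first partition
-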
Did a debug deploy.
I found that FOG saw the partition name as /dev/nvme0n11 instead of /dev/nvme0n1p1, which is what gdisk -l listed.
I tried to change the name, to no avail.
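For reference, listing the device nodes directly shows what actually exists on the client (assuming the same drive):
ls -l /dev/nvme0n1*
# should show /dev/nvme0n1 and /dev/nvme0n1p1, but no /dev/nvme0n11
-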
@Arrowhead-IT Hey, just wanted to let you know this is all great information. The more details the devs have, the better solution they can come up with. It’s impossible to have every bit of hardware that exists in the wild in the test lab, so it is key to get support from the user community to help build a better mousetrap.
-
@george1421 That’s what I figured; it seems like an odd problem, so I’m going to give all the info I can to help find it. Plus, these NVMe drives look like they are going to become the norm and phase out SATA, SAS, and SCSI, so I’ll do whatever I can to help get FOG working with them.
-
@Arrowhead-IT Thanks for providing information about those devices and helping us. Can you please run another debug session and post the full output of the following command?
lsblk -pno KNAME,MAJ:MIN -x KNAME
We need the exact output (especially the dev names - which you already provided - AND the IDs). Just take a picture and post it here if you don’t want to bother typing it all out.
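On an NVMe-only system the output should look roughly like this (illustrative only; minor numbers and any extra devices will differ):
/dev/nvme0n1   259:0
/dev/nvme0n1p1 259:1
-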
@Sebastian-Roth on it
-
@Sebastian-Roth Here ya go
I think one possibility is that the problem lies in the variable dump from FOG.
It lists the partition as /dev/nvme0n11, but fdisk -l or gdisk -l lists it as /dev/nvme0n1p1.
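If it helps the devs: my guess is that whatever builds the partition path just appends the partition number to the disk name, which works for /dev/sda but not for NVMe. A rough sketch of the rule in shell (purely illustrative, not FOG’s actual code):
disk=/dev/nvme0n1
part=1
case "$disk" in
  *[0-9]) echo "${disk}p${part}" ;;   # disk names ending in a digit (nvme, mmcblk) need a "p" -> /dev/nvme0n1p1
  *)      echo "${disk}${part}" ;;    # /dev/sda style -> /dev/sda1
esac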