M.2 PCIe SSD not recognised in FOG
-
Is it an AHCI or NVMe SSD?
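One quick way to check, from any Linux shell booted on the machine (a FOG debug session works); this assumes lspci is available in the environment:

    lspci | grep -i -e nvme -e sata   # an NVMe controller means NVMe; only a SATA/AHCI controller means AHCI
    ls /dev/nvme* 2>/dev/null         # device nodes such as /dev/nvme0n1 exist only for NVMe drives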
-
@Toby777 Hi, which FOG version are you running?
-
@Toby777 Please follow this: https://forums.fogproject.org/topic/6315/hp-z640-nvme-pci-e-drive
If you could provide the same information, it would help us improve FOG. Can you please run a debug session and post the full output of
lsblk -pno KNAME,MAJ:MIN -x KNAME
? We need the exact output (especially the device names AND the IDs). Just take a picture and post it here.
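For reference, on a machine with a single NVMe drive the output should look roughly like this (a hypothetical example; names and major:minor numbers will differ, though NVMe block devices typically use major 259):

    /dev/nvme0n1   259:0
    /dev/nvme0n1p1 259:1
    /dev/nvme0n1p2 259:2
-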
Thanks for the response. I’m running FOG 5602.
Not quite sure whether it is AHCI or NVMe. From searching on Google it is possibly NVMe, as it is the top-of-the-range model.
I will try your suggestions tomorrow and post the output here.
-
@Sebastian-Roth said:
lsblk -pno KNAME,MAJ:MIN -x KNAME
I tried executing the above command but got…
[root@fog bin]# lsblk -pno KNAME,MAJ:MIN -x KNAME
lsblk: invalid option -- 'p'

Usage:
 lsblk [options] [<device> ...]

Options:
 -a, --all            print all devices
 -b, --bytes          print SIZE in bytes rather than in human readable format
 -d, --nodeps         don't print slaves or holders
 -D, --discard        print discard capabilities
 -e, --exclude <list> exclude devices by major number (default: RAM disks)
 -I, --include <list> show only devices with specified major numbers
 -f, --fs             output info about filesystems
 -h, --help           usage information (this)
 -i, --ascii          use ascii characters only
 -m, --perms          output info about permissions
 -l, --list           use list format ouput
 -n, --noheadings     don't print headings
 -o, --output <list>  output columns
 -P, --pairs          use key="value" output format
 -r, --raw            use raw output format
 -s, --inverse        inverse dependencies
 -t, --topology       output info about topology
 -V, --version        output version information and exit
Running CentOS 6.7
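As an aside, the lsblk shipped with CentOS 6 predates the -p and -x options, which is why it bails out. If the same information is ever needed from the server's own shell, a plain ls works on any version:

    ls -l /dev/sd* /dev/nvme* 2>/dev/null   # the major, minor numbers appear where the file size normally would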
-
@Toby777 As mentioned above, this has to be run on the DELL XPS client in a debug session (Host -> Basic Tasks -> Debug).
-
My apologies…
Here’s the output…
-
This is like my problem, except I’m trying to download, not upload.
What happens if you set the “Host Primary Disk” for that client in the FOG web GUI to “/dev/nvme0n1”?
That should get it past the “cannot find HDD on system” error, but for me it wasn’t entering Partclone for a download; who knows, maybe it would work for an upload. Thanks,
-JJ
-
@Arrowhead-IT Thanks for the suggestion! I modified the Host Primary Disk value as you suggested and I then got the error saying “No resizable partitions found”.
So I then went to the Image properties and changed the Image Type from Single Disk - Resizable to Multiple Partition Image - Single Disk (Not Resizable), but it just hung after…
* Using Hard Disk: /dev/nvme0n1
I then changed it to Raw Image and it is now currently uploading.
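As an aside on why RAW succeeds where the partition-aware types hang: RAW mode never has to parse the partition table, since it is conceptually just a sector-by-sector stream of the whole block device, along these lines (illustrative only, not FOG's actual pipeline, and the output path is made up):

    dd if=/dev/nvme0n1 bs=4M | gzip > /images/raw-xps13.img.gz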
-
@Toby777 Awesome! That’s good to know. Hopefully they’ll be able to fix it so that NVMe works without using RAW, but the fact that it works at all is awesome! How fast is it uploading? RAW is typically much slower, but those PCIe-based drives are supposed to be crazy fast; just curious how fast.
-
@Arrowhead-IT The notebook and FOG server are on Gigabit connections and at the moment the upload rate is displaying 5.14GB/min.
What are the implications of using RAW when restoring to drives larger than the one currently being imaged? Will it just be a matter of manually expanding the partition to take up the remainder of the space?
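A quick sanity check on that figure (assuming GB here means GiB):

    echo "5.14 * 1024 / 60" | bc -l   # ≈ 87.7 MiB/s, a healthy share of gigabit wire speed (~119 MiB/s)

so a RAW upload at that rate is not far off being network-bound.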
-
@Toby777 I think it might still work on larger drives, and you can manually expand the partition. That’s pretty good for RAW from what I’ve seen. The trouble with RAW is that it copies every sector of the drive no matter how much space is actually used. Before multiple and extended Linux partitions became supported, I only used it for very specialized images for computers that always had the same size hard drive.
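On the expansion question: one option after a RAW restore to a larger disk is to grow the last partition and its filesystem from Linux (a FOG debug session would do). A minimal sketch, assuming a Windows NTFS system volume on partition 2 of the NVMe disk; the device and partition numbers are assumptions:

    parted -s /dev/nvme0n1 resizepart 2 100%   # grow partition 2 to the end of the disk
    ntfsresize -f /dev/nvme0n1p2               # grow the NTFS filesystem to fill the partition

Windows Disk Management’s “Extend Volume” does the same job from inside the deployed OS.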
-
@Arrowhead-IT ok cool… well, the image on the server came to 20GB. That’s from a 256GB SSD with the Image Compression setting set to 6. Still not too bad, actually, considering a captured Windows 10 image from the previous-model XPS 13 with a 256GB SATA disk using Single Disk - Resizable is about 17GB.
Now I guess the big test is to restore the Image on to the same machine and see if it boots.
-
While this is probably injecting noise into this thread: we typically create our reference image on a VM with 1 vCPU and a 40GB hard drive. With FOG we use single disk, non-resizable (only because 1.2.0 didn’t work too well with resizable disks in our environment), and in the setupcomplete.cmd file we run a script to expand the logical disk to the size of the physical disk. It worked well.
In this setup we capture the image without the drivers for the final target computer, so the image comes in at about 5GB on disk for a thin Win7 image and about 15GB for a fat Win7 image with Office and other apps. Then, just after the image is laid down on the target computer, we use the FOG post-install scripts to inject the right driver pack from the FOG server into the target computer (sketched below). This saves about 15GB of space that we don’t have to upload (once) and download for each OSD. Also, as new hardware is released we just need to update the drivers on the FOG server; there is no reason to recapture the image.
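A rough sketch of that driver-injection step as a FOG postdownload script; every path, device name, and the share layout below is an assumption for illustration, not FOG’s canonical API:

    #!/bin/bash
    # Hypothetical fragment of /images/postdownloadscripts/fog.postdownload,
    # run inside the FOG client environment right after the image is deployed.
    target=/dev/sda2                                    # deployed Windows partition (assumed)
    mkdir -p /ntfs
    ntfs-3g -o force,rw "$target" /ntfs || exit 1
    model=$(dmidecode -s system-product-name | xargs)   # e.g. "OptiPlex 9020"
    # Copy the matching driver pack into the image; Windows picks it up on
    # first boot if the DevicePath registry value includes C:\Drivers.
    cp -r "/images/drivers/${model}/." /ntfs/Drivers/ 2>/dev/null
    umount /ntfs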
-
Here we go. I found some time to look into this more closely!
Download the init.xz/init_32.xz files to test from https://drive.google.com/folderview?id=0B-bOeHjoUmyMazJLZDhGaEl5VTQ&usp=sharing and put them into place in /var/www/fog/service/ipxe/ or /var/www/html/fog/service/ipxe/ (probably a good idea to rename the original files instead of just overwriting them!). Please test and report back whether capturing/deploying works (Image type: Multiple Partition - Single Disk, Host Primary Disk: /dev/nvme0n1).
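Something along these lines should do it (the web root and download location are assumptions; adjust to your install):

    cd /var/www/fog/service/ipxe              # or /var/www/html/fog/service/ipxe
    mv init.xz init.xz.orig                   # keep the originals for easy rollback
    mv init_32.xz init_32.xz.orig
    cp /path/to/downloads/init.xz /path/to/downloads/init_32.xz .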
-
@george1421 said:
Also as new hardware is released we just need to update the drivers on the fog server, there is no reason to recapture the image.
That’s pretty slick, but it requires a lot of knowledge - and I’ll admit you seem to have a lot more skill than I do, likely from your experience.
For me, a 6TB drive is awfully cheap, and I can update an existing image before lunch. We have 22 images, one or two for each model. Keeping an image per model is simple to me, and deployment is simpler for both myself and my co-workers. We can afford the drive space and we have the time to make images.
-
@Wayne-Workman said:
For me, a 6TB drive is awfully cheap, and I can update an existing image before lunch. We have 22 images, one or two for each model. Keeping an image per model is simple to me, and deployment is simpler for both myself and my co-workers. We can afford the drive space and we have the time to make images.
See, the key/trick to this is that there are only one or two images for all systems. I don’t have to keep track of what the target computer is, because there is only one image. In our case we release a new golden image every quarter with all of the latest Windows and application updates; with 22 images that would be almost impossible to do. In my situation I can tell you that a Dell 790 or 9020 has the same image on it as an e7404 or e7550 (sans the model-specific drivers). If there is an issue impacting a 790, I’m almost assured it will impact all models, so if I fix it on one, I fix it for all (in theory).
-
@Sebastian-Roth Thanks for that! I will download and test and let you know how it goes.
-
I’ve set the image to Single Disk - Resizable and removed the custom entries in the Host’s Primary Disk field.
It all looks good when recognising the disk… however, it appears to be stuck at the following…
I’ve tried repairing the MBR just in case, but it still seems to be stuck at this point when attempting to capture an image.
-
@Toby777 Thanks for testing and reporting. Have you tried non-resizable (Multiple Partitions - Single Disk) yet?