Error deploying an existing image to SSD drive
-
Trying to deploy an image to an SSD drive that is smaller than the original system hard drive. This machine has 2 drives: 1 SSD (for faster performance) and a 1TB HDD SATA drive. I have temporarily disabled the SATA drive and am trying to deploy my image onto the SSD from an existing image (same machine and hardware type, minus the SSD drive), which is single disk - resizable. Initially I was getting an exit error 4. Seeing how this is an SSD drive, I have gone into the host machine setup and am using /dev/nvme0n1. But now I am getting an additional error: “failed to read back partitions”
I am using FOG 1.6. Now the tricky part is, the SSD drive is 256GB and the original SATA drive this image came from is a 500GB drive. The image itself is less than 60GB (39GB I believe), so I'm not sure why single disk - resizable won't work here. Time is of the essence, so any help would be greatly appreciated!
-
Let's start out by rescheduling a capture or deploy, but tick the checkbox that says debug before you submit the task.
Then PXE boot the target computer. After a few screens of text that you need to clear with the enter key, you should be dropped to a Linux command prompt. At that command prompt, key in the following:
lsblk
and grab a screenshot of the output. The issue I'm seeing is with the disk device called /dev/md0; that would indicate a RAID controller.
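(For what it's worth, at that same debug prompt you could also check whether the md software RAID driver has claimed any disks; this is just a sketch and assumes the usual proc interface is present in the FOS debug environment:)
cat /proc/mdstat
lsblk -o NAME,SIZE,TYPE,MODEL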
-
@Jim-Holcomb Ah OK it looks like you beat me to it.
So let's get a screenshot of the host management page for this target computer. We need to understand where that /dev/md0 is coming from.
-
@Jim-Holcomb If you look on the FOG server in the /images/aiov910 directory, does d1.mbr exist?
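(Something along these lines on the FOG server should answer that, assuming the default /images storage path:)
ls -l /images/aiov910/d1.mbr /images/aiov910/d1.partitions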
-
d1.partitions:
label: gpt
label-id: B4C26CFC-6D5A-4CE1-94D1-0F4E18D8D657
device: /dev/nvme0n1
unit: sectors
first-lba: 34
last-lba: 500118158

/dev/nvme0n1p1 : start= 2048, size= 532480, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=79E1F6C3-BEF0-$
/dev/nvme0n1p2 : start= 534528, size= 32768, type=E3C9E316-0B5C-4DB8-817D-F92DF00215AE, uuid=55A3D53F-A47D-$
/dev/nvme0n1p3 : start= 567296, size= 497502208, type=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7, uuid=622E9706-DECA-$
/dev/nvme0n1p4 : start= 498069504, size= 2048000, type=DE94BBA4-06D1-4D40-A16A-BFD50179D6AC, uuid=524E378B-F6E3-$
It's failing to restore the MBR/partition table.
The captured image is from a 500GB SATA drive; we are attempting to restore it to a 250GB NVMe drive.
Question: would block size be an issue, where the SATA drive would be 512B sectors and the NVMe would be 4K (a guess)? Would that cause the MBR to fail to deploy?
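(If you want to rule that out, the logical and physical sector sizes can be read from the debug shell; the device name below is just the one from this thread, and this assumes fdisk from util-linux is available:)
cat /sys/block/nvme0n1/queue/logical_block_size /sys/block/nvme0n1/queue/physical_block_size
fdisk -l /dev/nvme0n1 | grep 'Sector size'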
-
@george1421 said in Error deploying an existing image to SSD drive:
If you look on the FOG server in the /images/aiov910 directory, does d1.mbr exist?
Just to add a little bit here, as some of the information is being exchanged in chat as well… The d1.mbr file is there. But from what I see so far it doesn't seem to be an issue of the disk being too small. Why am I saying this? Let's do the calculation:
( 498069504 + 2048000 ) * 512 / 1024 / 1024 / 1024 = 238.5 GB
(that is nvme0n1p4 start + size, which is also roughly the last-lba value in the header, times the 512-byte sector size)
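(A quick way to redo that math at a shell prompt, assuming bc is available; the numbers are the ones from the dump above:)
echo 'scale=2; (498069504 + 2048000) * 512 / 1024 / 1024 / 1024' | bc
That prints 238.47, i.e. roughly a 256 GB disk.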
-> I can't imagine this image having been taken from a 500 GB or 1 TB disk. While the partitions don't have to fill the whole disk, we wouldn't see a last-lba value this close to the end of the last partition if the disk were much larger. The source disk having a 4K sector size is possible but not very likely, because first-lba is 34 and nvme0n1p1 starts at 2048.
@Jim-Holcomb So then let's see why it actually fails on nvme0n1 as well. Please schedule another debug deploy task, step through it till you hit the error and then manually run the printed command:
sgdisk -gl /images/aiov910/d1.mbr /dev/nvme0n1
-> Take a picture of the messages on screen and post here!
-
@Jim-Holcomb said in Error deploying an existing image to SSD drive:
The image itself is less than 60GB (39GB I believe), so I'm not sure why single disk - resizable won't work here.
The answer to this question is that the last partition is getting in the way. FOG currently does not move partitions, and as such that last partition at the end of the disk essentially prevents resizable from working.
Move that partition to before the big Windows partition, capture again, and I'm sure it will work as expected.
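(To double-check the partition order on the source machine before recapturing, printing the GPT is enough; the device name here is just an example:)
sgdisk -p /dev/sda
After the move, the big Windows partition should be the last one listed.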
-
@Quazz While I am not entirely sure till we see the error message, I would assume this is not the issue here (see my size calculation above).
-
@Sebastian-Roth Perhaps not for the MBR issue, but certainly for the resize issue.