Failure to expand shrunken resizeable image from Linux machines
-
@tom-elliott Whoops I missed that bit about going smaller than the source disk. Let me queue up the test environment and see if I can duplicate that condition too.
During testing I also confirmed that shrinking the source image didn’t mess up the source computer. Everything appears to run OK on the source image.
-
I was using a 100GB disk in my VM environment when I was testing the 50GB captured drive.
I’m recreating my test environment now virtually, though I do have the following pictures from a more ‘meat-space’ environment I was working with today…
That was testing a 128GB drive to a 320GB drive; the picture on the bottom shows the settings for the image.
Sorry that they’re actual pictures; the system doing the imaging is not connected to the network, internet, etc., and I didn’t have a flash drive to bring back with me at the time.
-
@george1421 Well setting the image to a smaller drive than the source DID successfully replicate the OP’s issue.
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs           100M  3.6M   96M   4% /run
/dev/sda1       5.9G  5.3G  296M  95% /
tmpfs           496M  112K  496M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
sda      8:0    0   25G  0 disk
└─sda1   8:1    0    6G  0 part /
jondoe@jondoe-VirtualBox ~ $
Note there are missing partitions too.
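For comparison, here is roughly how to check what the image thinks the layout should be versus what actually landed on the disk (a sketch only; the d1.* file names assume FOG’s default single-disk resizable image layout under /images, and the image name is just an example):
# on the FOG server
cat /images/LinuxMint/d1.partitions            # layout recorded at capture time
cat /images/LinuxMint/d1.fixed_size_partitions # partitions FOG will not resize
# on the restored client, as root
sfdisk -d /dev/sda                             # layout actually written to the target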
-
@george1421 The whole ‘going to a smaller disk’ thing seems to be related to offsets being passed that exceed the ‘physical’ dimensions of the drive being restored to; see this prior gallery for the related errors.
-
OK, so, after a fair bit of testing and retesting, here is what I’ve found out.
First of all, the following image gallery I created of the issue explains things well:
The TL;DR version is this:
LVM breaks things something fierce with Mint’s default layout. If you choose to use LVM and don’t set a custom partition layout, what you get from the 18.2 installer is what’s in the screenshot, and it will not expand properly.
Having the root partition anywhere but as the first and primary partition of the drive makes things break. In my case, the SWAP partition was always first in my images.
Having the root partition as the first and primary partition makes the expansion work almost 100%; it left some space unused, but it’s good enough for right now.
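(For anyone wanting to double-check which of these layouts a machine actually has before capturing, a quick look is enough; nothing FOG-specific here:)
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sda   # root should show up as the first primary partition (sda1)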
Important note: In my testing tonight I ended up using FOG 1.5 RC5, since I just rebuilt it tonight to do this testing. Prior testing was performed on FOG 1.4.4, which for me was producing stranger results than these.
Is this intended functionality?
-
I don’t understand. So it was the layout of how the partitions were applied to the disk that caused the resizing to not happen properly? So expanding does work when things are laid out in a specific fashion?
I was able to replicate the results through 3 VMs: one to capture at 50GB, one to deploy to at 30GB, and one to deploy to at 100GB (to mimic what you described as closely as possible).
I DID find some issues, and worked very diligently to try to fix them. I don’t think it has anything to do with the placement of the LVM directly; rather, the extended partition was moving the LVM around, so it was unable to find itself based on what was presented originally.
(See here for more information; it doesn’t describe this directly, but it might make sense of it.)
https://github.com/FOGProject/fogproject/commit/635c5050904c3e29edd26151d96b7217318acf6c
I was able to fix and deploy to all devices, the devices were able to boot without an issue, and all showed the “expansion” had worked as well. (Well, the capture system didn’t expand, but it was using the correct layout as to how it was originally configured. Meaning it applied back exactly what it should have, since it was the exact same disk. Prior to this it was expanding a tiny bit even to the same machine.)
I’ve updated the inits. Would you be willing to give them a shot and see if the “originals” will now start working?
wget -O /var/www/fog/service/ipxe/init.xz https://fogproject.org/inits/init.xz
wget -O /var/www/fog/service/ipxe/init_32.xz https://fogproject.org/inits/init_32.xz
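If you want an easy way back to the current behaviour, you could set the existing inits aside first (just a precaution; the paths simply mirror the default install location used above):
cp /var/www/fog/service/ipxe/init.xz /var/www/fog/service/ipxe/init.xz.bak
cp /var/www/fog/service/ipxe/init_32.xz /var/www/fog/service/ipxe/init_32.xz.bak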
I’m sorry it took so long to get anything, but I hope you understand that we are very busy with our own things too.
-
@tom-elliott While I’m not using an LVM disk, deploying a 50GB image to a 25GB target computer seemed to work. I still have a 1.4.4 build on my FOG server; I just copied over the inits.
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
sr0     11:0    1   1024M  0 rom
sda      8:0    0     25G  0 disk
├─sda2   8:2    0      1K  0 part
├─sda5   8:5    0 1021.8M  0 part
└─sda1   8:1    0     24G  0 part /
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs           100M  3.6M   96M   4% /run
/dev/sda1        24G  5.3G   18G  24% /
tmpfs           496M   92K  496M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $
-
For the next test I spun up a 50GB Ubuntu image (knowing the default layout is on LVM) and captured that. We all know that since the disk layout is LVM, FOG will capture the LVM partition as raw, even for a ‘single disk resizable’ image. That means the LVM partition is not resizable from within FOS at this point.
Here is the Ubuntu reference image layout:
Here is the Ubuntu image deployed to a larger (65GB target vs 50GB reference image) target computer:
Here are the results of trying to deploy the 50GB captured image to a 25GB target computer:
-
@tom-elliott I’ll give it a shot today.
First, though, I’m going to post screenshots of the output in the same style everyone else is using, to try to help explain and identify the original issue better and make sure everyone is on the same page.
-
OK, I want to break down the results in the same sort of output I’m seeing used here, showing the source and destination of each attempt.
The destination is always a 75GB machine, which is always a bigger disk. The ‘smaller disk’ restore issue is, I think, already identified?
This is the ‘custom’ partitioned layout I was using yesterday which caused a problem:
Source:
Destination:
This is the ‘default’ layout Mint Mate 18.2 produces from its installer when LVM is selected:
Source:
Destination:
This is the ‘working’ solution wherein the root partition is the first partition on the drive:
Source:
Destination:
Something I noticed on all the machines, though, was that the SWAP partition ONLY mounted on the LVM destination machine…!? All other attempts without LVM led to the destination machine not mounting its swap. Really weird…
Hope this helps explain what I’m running into. I’ll try the patched init today, though I have to run off for a bit at the moment, sorry!
-
@xipher I believe the swap mount is unrelated to where the partition actually sits; rather, Linux Mint seems to set the swap by UUID. While we do try to reset the UUID, this isn’t working, and I’m not overly worried about fixing the UUID for SWAP partitions anyway. (Why, you ask?) Because you can edit /etc/fstab and change the swap entry from the UUID to the actual location. In my case /dev/sda5 is the swap partition, so replace the UUID bit with /dev/sda5 and everything works properly on reboots.
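Roughly, the /etc/fstab edit looks like this (the UUID is just a placeholder and the device path assumes the /dev/sda5 example above):
# before: the UUID no longer matches the restored swap partition
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  swap  sw  0  0
# after: point at the device directly
/dev/sda5  none  swap  sw  0  0
After saving, running swapon -a (or rebooting) should bring the swap back, and free -h will confirm it.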
-
@tom-elliott Sorry it took so long. I put the new init in, but sadly the results were identical :C
Also, I totally agree on the SWAP partition issue; it’s a change I’ll make on the client I capture from.
-
@xipher What do you mean, identical? Of course my test boxes are using the default layout of Linux Mint, but all are resizing appropriately, whereas my tests yesterday, before the changes, failed in almost the exact same way you described.
-
Also, you did update the init first? There’s no need to rerun the installer for these tests; in fact, it will write back the “bad” inits if you do.
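A quick sanity check that the inits on the server really are the new ones is to compare checksums against a freshly downloaded copy (same paths and URL as in my earlier post):
md5sum /var/www/fog/service/ipxe/init.xz
wget -qO- https://fogproject.org/inits/init.xz | md5sum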
-
Ran the two commands in your prior post as root:
wget -O /var/www/fog/service/ipxe/init.xz https://fogproject.org/inits/init.xz
wget -O /var/www/fog/service/ipxe/init_32.xz https://fogproject.org/inits/init_32.xz
Recaptured all 3 images, then cast them to the test system one after the other and recorded the results.
Didn’t re-run the installer or anything else; the system wasn’t rebooted, etc. I pretty much just replaced the inits per the commands above, recaptured, and recast.
I could try providing a VM image of one of the systems I’m capturing from that exhibits the issue? Could provide the actual images too.
-
I was just looking at my Ubuntu example and I found something that isn’t right. The image is LVM, so FOG can only copy it in raw mode, meaning no expansion can happen (this is a reason for using standard partitions). But when I went from a 50GB original image to a 65GB target computer, it grew the /boot partition by 15GB (the difference between 50 and 65). I would have thought that the /boot partition would be a fixed-size partition (as with MS Windows). I created a new Linux Mint master image based on LVM and the same thing happened as with Ubuntu. My original intent was to have some extra space on the physical disk so I could test some LVM expansion tools.
Linux Mint with LVM defaults.
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME                 MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                   11:0    1 1024M  0 rom
sda                    8:0    0   65G  0 disk
├─sda2                 8:2    0    1K  0 part
├─sda5                 8:5    0 49.5G  0 part
│ ├─mint--vg-swap_1  253:1    0    1G  0 lvm  [SWAP]
│ └─mint--vg-root    253:0    0 48.5G  0 lvm  /
└─sda1                 8:1    0 15.5G  0 part /boot
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       474M     0  474M   0% /dev
tmpfs                      100M  3.6M   96M   4% /run
/dev/mapper/mint--vg-root   48G  5.0G   41G  11% /
tmpfs                      496M  168K  496M   1% /dev/shm
tmpfs                      5.0M  4.0K  5.0M   1% /run/lock
tmpfs                      496M     0  496M   0% /sys/fs/cgroup
/dev/sda1                   15G   67M   15G   1% /boot
cgmfs                      100K     0  100K   0% /run/cgmanager/fs
tmpfs                      100M  4.0K  100M   1% /run/user/108
tmpfs                      100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $
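For reference, the manual LVM expansion I was intending to test would go roughly like this. It is only a sketch: it assumes the spare space is actually left unallocated at the end of the disk (rather than swallowed by /boot as above), that the PV sits on the logical partition /dev/sda5 inside the extended partition /dev/sda2, and the Mint default volume group/logical volume names mint-vg/root. Run as root:
parted /dev/sda resizepart 2 100%          # grow the extended partition to the end of the disk
parted /dev/sda resizepart 5 100%          # grow the logical partition holding the LVM PV
pvresize /dev/sda5                         # let LVM see the larger physical volume
lvextend -l +100%FREE /dev/mint-vg/root    # give the new space to the root LV
resize2fs /dev/mint-vg/root                # grow the ext4 filesystem online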
-
@george1421 Close to my findings, wherein the first partition (sda1) will grow proportionally, but the rest… no go.
If you create the same layout but have the root partition where you currently have the boot partition, it will grow the root and ‘work’ as it seems intended.
If you were to put a home partition in the same place, it would grow, but not the boot or root… etc.
-
@george1421 It only expands partitions that are “expandable” and not “fixed”. Meaning, if you have one ext4 partition that shrinks (say /boot here), it will try to expand the expandable partitions somewhat evenly. However, this is not a perfect system, nor has it ever really been. I’ve tried for all I’m worth, but I’m only one man, sorry.
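Just to illustrate the idea (this is not the actual procsfdisk.awk logic, only a back-of-the-envelope sketch of “split the new space evenly across the non-fixed partitions”, with made-up numbers):
# hypothetical sizes: a 70GiB image restored to a 75GiB disk, 512-byte sectors
old_disk_sectors=146800640
new_disk_sectors=157286400
resizable_partitions=2          # e.g. / and /home; swap and the extended container stay fixed
extra=$(( new_disk_sectors - old_disk_sectors ))
per_part=$(( extra / resizable_partitions ))
echo "each resizable partition grows by about $per_part sectors ($(( per_part * 512 / 1024 / 1024 )) MiB)"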
-
Great that you all have been working hard on this. Looks like we are getting there, although the initially posted disk layout (sda1=root, sda2=extended, sda5=swap) has changed. From the latest pictures posted it looks as if the expansion is still not working for this kind of layout (SWAP being on sda1).
I was just able to replicate this issue, but I didn’t have the time to find what’s causing it in our script:
$ cat in1.txt
label: dos
label-id: 0x77265efa
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 3940352, type=82, bootable
/dev/sda2 : start= 3942400, size= 69457920, type=5
/dev/sda5 : start= 3944448, size= 3940352, type=83
/dev/sda6 : start= 7886848, size= 65513472, type=83

$ ./procsfdisk.awk -v SECTOR_SIZE=512 -v CHUNK_SIZE=512 -v MIN_START=2048 -v action=filldisk -v target=/dev/sda -v sizePos=146800640 -v diskSize=146800640 -v fixedList=1:5 in1.txt
ERROR: New start and size (150960130) on (/dev/sda1) is larger than the disk (146800640).
# ERROR in new partition table, quitting.
# ERROR: /dev/sda5 has an overlap.
label: dos
label-id: 0x77265efa
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 3940352, type=82, bootable
/dev/sda2 : start= 3944448, size= 142856192, type=5
/dev/sda5 : start= 142856194, size= 8103936, type=83
/dev/sda6 : start= 142856194, size= 134747648, type=83
From my point of view the output of sfdisk -d /dev/sda is much more helpful than df -h.
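If anyone hitting this wants to post their exact layout, something like the following (run as root on the client; the file name is just an example) captures it in the same dump format the script consumes:
sfdisk -d /dev/sda > sda-layout.txt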
-
@sebastian-roth said in Failure to expand shrunken resizeable image from Linux machines:
./procsfdisk.awk -v SECTOR_SIZE=512 -v CHUNK_SIZE=512 -v MIN_START=2048 -v action=filldisk -v target=/dev/sda -v sizePos=146800640 -v diskSize=146800640 -v fixedList=1:5 in1.txt
Based on the most current version, this is what I see:
./procsfdisk.awk -v SECTOR_SIZE=512 -v CHUNK_SIZE=512 -v MIN_START=2048 -v action=filldisk -v target=/dev/sda -v sizePos=146800640 -v diskSize=146800640 -v fixedList=1:5 in1.txt
# Partition table is consistent.
label: dos
label-id: 0x77265efa
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 3940352, type=82, bootable
/dev/sda2 : start= 3944446, size= 71485442, type=5
/dev/sda5 : start= 3944448, size= 3940352, type=83
/dev/sda6 : start= 7884800, size= 67425792, type=83