Failure to expand shrunken resizeable image from Linux machines
-
@xipher said in Failure to expand shrunken resizeable image from Linux machines:
The only commonality at the moment is that it’s Linux Mint 18.2 x64 being captured, and always with a swap at the start of the drive, then two logical partitions on a single extent after that
I will spin this test up in my home VM lab. I’m actually writing this post on an LM 18.2 OS. I should be able to confirm whether this is an issue later tonight.
-
@george1421 keen to hear what you find! I’ll take some pictures of what I run into myself, source material and what I get in the end.
Also, I didn’t mean to sound off-color with the Windows comment, just genuine curiosity about whether I had the wrong idea on things. Bad tone on my part.
-
Welp, I can’t seem to duplicate your error. That doesn’t mean anything other than that I can’t duplicate your errors in my lab. I tested on both vSphere and VirtualBox on my LM laptop.
I simply downloaded a fresh ISO of LM 18.2 Mate (I needed that ISO for a home project anyway) and installed it on the source VM. For the source VM I created a 50GB hard drive and installed Mint 18.2 on it. I used all default settings, just an easy and quick install without making any decisions other than the password. Once it was installed, I PXE booted the VM, registered it with my FOG-Pi server, and captured the image.
I then created a new VM with a 65GB hard drive (a different size than the source disk, by design), PXE booted it, registered it, and deployed at the end of registration.
Here is what I have the image definition set up as:
lm_source
Here is the output of df and lsblk on the source virtual machine:
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs           100M  3.6M   96M   4% /run
/dev/sda1        49G  5.3G   41G  12% /
tmpfs           496M   92K  496M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M  4.0K  100M   1% /run/user/108
tmpfs           100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
sda      8:0    0   50G  0 disk
├─sda2   8:2    0    1K  0 part
├─sda5   8:5    0 1021M  0 part [SWAP]
└─sda1   8:1    0   49G  0 part /
jondoe@jondoe-VirtualBox ~ $
lm_target
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs           100M  3.6M   96M   4% /run
/dev/sda1        63G  5.3G   55G   9% /
tmpfs           496M   92K  496M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M  4.0K  100M   1% /run/user/108
tmpfs           100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
sda      8:0    0   65G  0 disk
├─sda2   8:2    0    1K  0 part
├─sda5   8:5    0 1021M  0 part
└─sda1   8:1    0 63.7G  0 part /
jondoe@jondoe-VirtualBox ~ $
As you can see, on the target computer the root partition did expand to fill the physical disk, which happens to be larger than the source disk.
From looking at the output of lsblk you can see that LM didn’t use LVM when creating the disk.
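For anyone following along, here is a quick, generic way to confirm whether a box is using LVM at all (standard tools, nothing FOG-specific):
lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT   # LVM devices show up with TYPE "lvm"
sudo pvs                                    # empty output means no LVM physical volumes
sudo vgs                                    # empty output means no volume groups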
-
So if I’m gathering things correctly, you’re attempting to go down in size from a 50GB (capture system) disk to a 30GB (deploy system) disk? What were the sizes when you said you deployed to a “larger” disk?
-
@tom-elliott Whoops I missed that bit about going smaller than the source disk. Let me queue up the test environment and see if I can duplicate that condition too.
During testing I also confirmed that shrinking the source image didn’t mess up the source computer. Everything appeared to run OK on the source machine.
-
I was using a 100GB disk in my VM environment when I was testing the 50GB captured drive.
I’m recreating my test environment now virtually, though I do have the following pictures from a more ‘meat-space’ environment I was working with today…
That was testing a 128GB drive deployed to a 320GB drive; the picture on the bottom shows the settings for the image.
Sorry that they’re actual photos: the system doing the imaging is not connected to the network, internet, etc., and I didn’t have a flash drive to bring back with me at the time.
-
@george1421 Well, deploying the image to a smaller drive than the source DID successfully replicate the OP’s issue.
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs           100M  3.6M   96M   4% /run
/dev/sda1       5.9G  5.3G  296M  95% /
tmpfs           496M  112K  496M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0     11:0    1 1024M  0 rom
sda      8:0    0   25G  0 disk
└─sda1   8:1    0    6G  0 part /
jondoe@jondoe-VirtualBox ~ $
Note there are missing partitions too; the extended and swap partitions (sda2/sda5) didn’t come across.
-
@george1421 The whole ‘going to a smaller disk’ thing seems to be related to offsets being passed that exceed the ‘physical’ dimensions of the drive being restored to; see the prior gallery for the related errors.
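A rough way to sanity-check that condition, assuming the captured image directory contains an sfdisk-style dump of the saved layout (FOG typically keeps one as d1.partitions under the image directory; treat the exact path and filename as assumptions):
# Total sectors actually available on the target disk
sudo blockdev --getsz /dev/sda
# Highest sector the saved layout expects to exist (start + size of each entry, take the max)
awk -F'[=,]' '/start=/ {end=$2+$4; if (end>max) max=end} END {print max}' /images/<imagename>/d1.partitions
# If the second number is larger than the first, the saved offsets run past the end of the
# smaller target disk, which would match the restore errors shown in the gallery.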
-
OK, so, after a fair bit of testing and retesting, here is what I’ve found out.
First of all, the following image gallery I created of the issue explains things well:
The TL;DR version is this:
LVM breaks things something /fierce/ with Mint’s default layout. If you choose to use LVM and don’t set a custom partition layout, what you get from the 18.2 installer is what’s in the screenshot, and it will not expand properly.
Having the root partition anywhere other than the first primary partition of the drive makes things break. In my case, the SWAP partition was always first in my images.
Having the root partition as the first primary partition makes the expansion work almost 100%; it left some space behind, but it’s good enough for right now (a quick way to check the on-disk partition order is sketched just below).
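For reference, here is a minimal way to see which partition physically sits first on the disk (standard tools, nothing FOG-specific); look at the Start column rather than the partition number:
sudo parted /dev/sda unit s print   # prints each partition with its start/end sectors
sudo sfdisk -l /dev/sda             # same information via sfdisk
The layout that expanded cleanly for me is the one where root (/) has the lowest start sector, as the first primary partition, with swap later in the layout.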
Important Note: In my testing tonight I ended up using FOG 1.5 RC5, since I /just/ rebuilt it tonight to do this testing. Prior testing was performed on FOG 1.4.4, which for me was producing stranger results than these.
Is this intended functionality?
-
I don’t understand. So it was your layout of how the partitions were applied to disk that caused the resizing to not happen properly? So expanding does work when things are laid out in a specific fashion?
I was able to replicate the results across 3 VMs: one for capture at 50GB, one to deploy to at 30GB, and one to deploy to at 100GB (to mimic what you described as closely as possible).
I DID find some issues, and I worked very diligently to try to fix them. I don’t think it has anything to do with the placement of LVM directly; rather, the extended partition was moving the LVM around, and it was unable to find itself based on what was presented originally.
(See here for more information; it doesn’t describe this directly, but it should make sense.)
https://github.com/FOGProject/fogproject/commit/635c5050904c3e29edd26151d96b7217318acf6c
I was able to fix and deploy to all devices, the devices were able to boot without an issue, and all showed the “expansion” had worked as well. (Well, the capture system didn’t expand, but it was laid out exactly as it was originally configured, meaning it applied back exactly what it should have, since it was the exact same disk. Prior to this it was expanding a tiny bit even on the same machine.)
I’ve updated the inits. Would you be willing to give them a shot and see if the “originals” will now start working?
wget -O /var/www/fog/service/ipxe/init.xz https://fogproject.org/inits/init.xz
wget -O /var/www/fog/service/ipxe/init_32.xz https://fogproject.org/inits/init_32.xz
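After pulling those down, a quick sanity check that the files on disk really did get replaced (just timestamps and sizes, nothing more sophisticated implied):
ls -lh /var/www/fog/service/ipxe/init.xz /var/www/fog/service/ipxe/init_32.xz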
I’m sorry it took so long to get anything to you, but I hope you understand that we are very busy with our own things too.
-
@tom-elliott While I’m not using an LVM disk, deploying a 50GB image to a 25GB target computer seemed to work. I still have a 1.4.4 build on my FOG server; I just copied the new inits over.
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
sr0     11:0    1   1024M  0 rom
sda      8:0    0     25G  0 disk
├─sda2   8:2    0      1K  0 part
├─sda5   8:5    0 1021.8M  0 part
└─sda1   8:1    0     24G  0 part /
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            474M     0  474M   0% /dev
tmpfs           100M  3.6M   96M   4% /run
/dev/sda1        24G  5.3G   18G  24% /
tmpfs           496M   92K  496M   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           496M     0  496M   0% /sys/fs/cgroup
cgmfs           100K     0  100K   0% /run/cgmanager/fs
tmpfs           100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $
-
For the next test I spun up a 50GB Ubuntu image (knowing the default layout is on LVM) and captured that. We all know that since the disk layout is LVM, FOG will capture the LVM partition as raw, even for a single-disk resizable image. That means the LVM partition is not resizable from within FOS at this point.
Here is the Ubuntu reference image layout:
Here is the Ubuntu image deployed to a larger target computer (65GB target vs 50GB reference image):
Here are the results of trying to deploy the 50GB captured image to a 25GB target computer:
-
@tom-elliott I’ll give it a shot today
I’m going to post up screenshots of the output in the same style everyone else is using first, though, to try to help explain and identify the original issue better; I want to make sure everyone is on the same page by using the same style of output.
-
OK, I want to break down the results in the same sort of output I see being used here, for the source and destination of each attempt.
The destination is always a 75GB machine, which is always a bigger disk. The ‘smaller disk’ restore issue is, I think, already identified?
This is the ‘custom’ partitioned layout I was using yesterday which caused a problem:
Source:
Destination:
This is the ‘default’ layout Mint Mate 18.2 produces from its installer when LVM is selected:
Source:
Destination:
This is the ‘working’ solution wherein the root partition is the first partition on the drive:
Source:
Destination:
Something I noticed on all the machines, though, was that the SWAP partition ONLY mounted on the LVM destination machine…!? All other attempts without LVM led to the destination machine not mounting its swap. Really weird…
Hope this helps explain what I’m running into. I’ll try the patched init today, though I have to run off for a bit at the moment, sorry!
-
@xipher I believe the swap mount is unrelated to where the partition actually sits; rather, Linux Mint seems to set the swap by UUID. While we do try to reset the UUID, this isn’t working, and I’m not overly worried about fixing the UUID for SWAP partitions anyway. (Why, you ask?) Because you can edit /etc/fstab and change the swap entry from the UUID to the actual location. In my case /dev/sda5 is the swap partition, so replace the UUID bit with /dev/sda5 and everything works properly on reboots.
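A minimal sketch of that edit, assuming the swap partition really is /dev/sda5 as in my case (check with lsblk or blkid first):
sudo blkid /dev/sda5          # confirm the device and its (new) UUID
sudo nano /etc/fstab          # or your editor of choice
# change the swap line from something like:
#   UUID=<old-swap-uuid>  none  swap  sw  0  0
# to:
#   /dev/sda5             none  swap  sw  0  0
sudo swapon -a                # activate it without rebooting
swapon --show                 # confirm swap is now in use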
-
@tom-elliott Sorry it took so long. I put the new init in, but the results were identical, sadly :C
Also, totally agree on the SWAP partition issue; it’s a change I’ll make on the client I capture from.
-
@xipher What do you mean identical? Of course my test boxes are using the default layout of Linux Mint, but all are appropriately resizing, whereas my tests yesterday before the changes failed in almost exactly the way you described.
-
Also, you did update the init first? There’s no need to rerun the installer for these tests, as it would write the “bad” inits back if you did.
-
Ran the two commands in your prior post as root:
wget -O /var/www/fog/service/ipxe/init.xz https://fogproject.org/inits/init.xz
wget -O /var/www/fog/service/ipxe/init_32.xz https://fogproject.org/inits/init_32.xz
Recaptured all 3 images, then cast them to the test system one after the other and recorded the results.
Didn’t re-run the installer or anything else, and the system wasn’t rebooted; I pretty much just replaced the inits per the commands above, recaptured, and recast.
I could try providing a VM image of one of the systems I’m capturing from that exhibits the issue? Could provide the actual images too.
-
I was just looking at my Ubuntu example and I found something not right. The image is LVM, so FOG can only copy it in raw mode, and so no expansion can happen (this is A reason for using standard partitions). But when I went from a 50GB original image to a 65GB target computer, it grew the /boot partition by about 15GB (the difference between 50 and 65). I would have thought that the /boot partition would be a fixed-size partition (as with MS Windows). I created a new LM master image based on LVM and the same thing happened as with Ubuntu. My original intent was to have some extra space on the physical disk so I could test some LVM expansion tools.
Linux Mint with LVM defaults.
jondoe@jondoe-VirtualBox ~ $ lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                  11:0    1 1024M  0 rom
sda                   8:0    0   65G  0 disk
├─sda2                8:2    0    1K  0 part
├─sda5                8:5    0 49.5G  0 part
│ ├─mint--vg-swap_1 253:1    0    1G  0 lvm  [SWAP]
│ └─mint--vg-root   253:0    0 48.5G  0 lvm  /
└─sda1                8:1    0 15.5G  0 part /boot
jondoe@jondoe-VirtualBox ~ $ df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                       474M     0  474M   0% /dev
tmpfs                      100M  3.6M   96M   4% /run
/dev/mapper/mint--vg-root   48G  5.0G   41G  11% /
tmpfs                      496M  168K  496M   1% /dev/shm
tmpfs                      5.0M  4.0K  5.0M   1% /run/lock
tmpfs                      496M     0  496M   0% /sys/fs/cgroup
/dev/sda1                   15G   67M   15G   1% /boot
cgmfs                      100K     0  100K   0% /run/cgmanager/fs
tmpfs                      100M  4.0K  100M   1% /run/user/108
tmpfs                      100M   20K  100M   1% /run/user/1000
jondoe@jondoe-VirtualBox ~ $
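Since my original intent was to play with LVM expansion tools anyway, here is a rough outline of reclaiming that space by hand after deploy, based on the volume group names visible in the lsblk output above (volume group mint-vg, with root and swap_1 logical volumes). Treat it as a sketch rather than a tested recipe:
# 1. If the partition holding the PV (sda5, inside the extended partition) did not already
#    take the extra space, grow it first with fdisk/parted.
# 2. Tell LVM the physical volume is now bigger:
sudo pvresize /dev/sda5
# 3. Hand all the newly freed extents to the root logical volume:
sudo lvextend -l +100%FREE /dev/mint-vg/root
# 4. Grow the ext4 filesystem to match (resize2fs can do this online for ext4):
sudo resize2fs /dev/mint-vg/root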