First partition is not resizing when deploying an image
I have seen similar problems reported in several places, but none of the threads I looked at had information I could apply to my situation.
This is FOG 1.5.4, running on a CentOS 7 server, trying to deploy a Fedora 28 image (with non-LVM partitioning).
I captured the image and it deploys and works just fine. But on deploy, /dev/sda1 (/ on the image) is not resizing at all - it ends up almost completely full, with something like 100 MB free (this is actually smaller than it was on the machine it was captured from). /dev/sda2 (/home) is being resized to take all the remaining space.
What I would like is one of two things: either /dev/sda1 grows proportionately with /dev/sda2, or /dev/sda1 gets a fixed size of, say, 50 GB with the remainder of the available space going to /dev/sda2. Either way is fine, whichever can be accomplished most easily.
Currently, on the FOG server my `d1.fixed_size_partitions` file is empty, and the other `d1.*` files are as follows:
## d1.minimum.partitions

```
label: dos
label-id: 0x9f99f7ab
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 43748073, type=83, bootable
/dev/sda2 : start= 390707200, size= 4852470, type=83
```

## d1.original.fstypes

```
/dev/sda1 extfs
/dev/sda2 extfs
```

## d1.partitions

```
label: dos
label-id: 0x9f99f7ab
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 44439108, type=83, bootable
/dev/sda2 : start= 390707200, size= 1562817536, type=83
```
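For reference, the sector counts in these sfdisk-style dumps use 512-byte sectors (per the `unit: sectors` header), so they can be converted to approximate sizes with a quick shell calculation. The helper name below is just for illustration:

```shell
# Convert an sfdisk sector count to whole GiB, assuming 512-byte sectors.
sectors_to_gib() {
    echo $(( $1 * 512 / 1024 / 1024 / 1024 ))
}

sectors_to_gib 44439108     # sda1 in d1.partitions: about 21 GiB
sectors_to_gib 1562817536   # sda2 in d1.partitions: about 745 GiB
```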
@Sebastian-Roth I actually forgot to grab the `df -h` output before resizing things, but after resizing and re-capturing, it seems to be deploying correctly without consuming all the free space. So this can be closed now I think, at least as far as it affects me. Thanks for the assistance.
> I can just manually resize sda1 on that laptop and capture it again. Would that solve this whole problem?
I am fairly sure it would! It does depend a bit on your overall partition layout (there are possible pitfalls), but you are able to influence the resulting partitions by modifying the master image.
@Sebastian-Roth Thanks. I suppose it could be because of the way the image was captured: it was originally created as a KVM virtual machine, captured from there, and deployed to a laptop. Then there was a problem and the original image was lost, so after making the necessary adjustments it was re-captured from that laptop. I suppose the storage issue could stem from that, since the free space on the VM wasn't "real" (at least insofar as the unused space wasn't actually taking up any space on the physical disk).
The server is at work, so I'll get that output tomorrow and post it here. But something I thought of as I read your reply: I can just manually resize sda1 on the reference laptop I'm now using to build the image and capture it again. Would that solve this whole problem?
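If the space following sda1 on that laptop really is unallocated, one possible way to grow it before recapturing is `growpart` (from cloud-utils) followed by `resize2fs`. This is only a sketch under that assumption, not a tested procedure; these commands modify the disk, so verify the layout first and keep a backup:

```shell
# Inspect the current layout; confirm that free space follows /dev/sda1.
fdisk -l /dev/sda

# Grow partition 1 of /dev/sda up to the next partition (or end of disk).
growpart /dev/sda 1

# Grow the ext4 filesystem to fill the enlarged partition
# (ext4 supports online growth, so this works even on a mounted /).
resize2fs /dev/sda1

# Confirm the new size before capturing the image again.
df -h /
```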
@bordercollie Interesting one. From the figures you posted, it seems like the first partition was already very full on the original system you captured the image from. I say this because the size of sda1 in `d1.minimum.partitions` is not much smaller than in `d1.partitions`. Calculating the size: 44439108 sectors × 512 bytes per sector / 1024 / 1024 / 1024 ≈ 21 GB for sda1. Is that correct?

So just from the figures, it looks to me like FOG is not shrinking your sda1 partition but simply deploying it at a similar size, and the partition itself is just pretty much full to begin with. Maybe that's not correct, but that's what the numbers suggest. Please run `df -h` on the machine before capturing the image.
Admittedly, we do a little bit of fiddling with the numbers on resizable images, so it is actually possible that you end up with sda1 being a tiny bit smaller than it was originally.
I fully understand your intention of having sda1 expanded to, let's say, 50 GB or some proportion of the destination disk size. Unfortunately we don't have the functionality to adjust resize options manually through FOG image settings at the moment. It's definitely something I would like to add in the future, but there are many other things higher up on the todo list, I suppose.
I just looked at the figures again to come up with a quick fix for you and noticed something strange. Usually partitions follow one another pretty closely, so if sda1 starts at sector 2048 and has a size of 44439108 sectors, I would expect sda2 to start at sector 44441156 or a couple of sectors further. But in your case sda2 starts at sector 390707200, which is 186 GB from the start of the disk, while sda1 only takes up a good 21 GB of that space; the other 165 GB are just unused space lost between sda1 and sda2.
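That gap can be double-checked with simple shell arithmetic on the figures from the `d1.partitions` dump (a quick sanity check, not part of FOG itself):

```shell
# Figures from the d1.partitions dump; 512-byte sectors assumed.
sda1_start=2048
sda1_size=44439108
sda2_start=390707200

sda1_end=$(( sda1_start + sda1_size ))   # first sector after sda1: 44441156
gap=$(( sda2_start - sda1_end ))         # unused sectors between the partitions
echo "gap: $(( gap * 512 / 1024 / 1024 / 1024 )) GiB"   # roughly 165 GiB
```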
When you boot up your original machine to get the disk usage (`df -h`), please run `fdisk -l /dev/sda` as well and post the full output here.