Odd capture and deploy disk size observations.
-
Hello,
I’m noticing something odd when capturing and deploying images. I’m capturing and deploying Linux in this case.
While things are working, I wanted to bring this up in the event that I don’t understand how Fog is supposed to work.
I’ve gone with Fog defaults in terms of:
Image Type = Single Disk - Resizable
Partition = Everything
However, I’ve captured a system with a 450GB drive, a single 450GB partition, and roughly 100GB used.
When I deploy that very image to a different system having a 400GB drive, I get an error that the source drive is larger than the target drive and it cannot continue.
Conversely, when I take that same captured image and deploy it to a system with a 1TB drive, it creates a 450GB partition, leaving roughly half of the drive unpartitioned.
Am I misunderstanding how Fog works?
-
I’ve found and read the link below and feel that I should not be encountering the issues described.
However, my source image uses XFS with no LVM containers of any sort, so could that be the issue?
There is a blurb about using ext2, ext3, and ext4 where XFS is not mentioned, but that’s under the section Multiple Partition Image - Single Disk (Not Resizable).
https://docs.fogproject.org/en/latest/management/web/images/#image-type
-
@aurfalien I’m not sure what you’re asking.
Are you seeing it say that it’s going to make the disk and resize the partitions? Or does it have something like ‘Not expanding’?
It should be noted we can only expand XFS partitions, but it’s a best-effort thing and, if memory serves, a relatively new addition at that.
You could try a deploy debug and get us the /tmp/xfslog.txt file from the client machine that may give us more information.
This is the code (lines 322-369) that deals with XFS on deploy for resizable images:
```bash
xfs)
    if [[ $type == "down" ]]; then
        dots "Attempting to resize $fstype volume ($part)"
        # XFS partitions can only be expanded when there is free space after that partition.
        # Retrieving the partition number of a XFS partition that has free space after it.
        local xfsPartitionNumberThatCanBeExpanded=$(parted -s -a opt $disk "print free" | grep -i "free space" -B 1 | grep -i "xfs" | cut -d ' ' -f2)
        local currentPartitionNumber=$(echo $part | grep -o '[0-9]*$')
        if [[ "$xfsPartitionNumberThatCanBeExpanded" == "$currentPartitionNumber"a ]]; then
            parted -s -a opt $disk "resizepart $xfsPartitionNumberThatCanBeExpanded 100%" >>/tmp/xfslog.txt 2>&1
            if [[ $? -gt 0 ]]; then
                echo "Failed"
                debugPause
                handleError "Could not resize partition $part (${FUNCNAME[0]})\n Info: $(cat /tmp/xfslog.txt)\n Args Passed: $*"
            fi
            if [[ ! -d /tmp/xfs ]]; then
                mkdir /tmp/xfs >>/tmp/xfslog.txt 2>&1
                if [[ $? -gt 0 ]]; then
                    echo "Failed"
                    debugPause
                    handleError "Could not create /tmp/xfs (${FUNCNAME[0]})\n Info: $(cat /tmp/xfslog.txt)\n Args Passed: $*"
                fi
            fi
            mount -t xfs $part /tmp/xfs >>/tmp/xfslog.txt 2>&1
            if [[ $? -gt 0 ]]; then
                echo "Failed"
                debugPause
                handleError "Could not mount $part to /tmp/xfs (${FUNCNAME[0]})\n Info: $(cat /tmp/xfslog.txt)\n Args Passed: $*"
            fi
            xfs_growfs $part >>/tmp/xfslog.txt 2>&1
            if [[ $? -gt 0 ]]; then
                echo "Failed"
                debugPause
                handleError "Could not grow XFS partition $part (${FUNCNAME[0]})\n Info: $(cat /tmp/xfslog.txt)\n Args Passed: $*"
            fi
            umount /tmp/xfs >>/tmp/xfslog.txt 2>&1
            if [[ $? -gt 0 ]]; then
                echo Failed
                debugPause
                handleError "Could not unmount $part from /tmp/xfs (${FUNCNAME[0]})\n Info: $(cat /tmp/xfslog.txt)\n Args Passed: $*"
            fi
            echo "Done"
        else
            echo "Failed, XFS partition cannot be expanded"
        fi
    fi
    ;;
```
-
@Tom-Elliott Thanks Tom, super appreciate the reply.
Rather than focus on XFS, which to me is a file system best left to backing an NFS share for global access, I can always create an EXT2-, EXT3-, or EXT4-based OS and deploy that.
However, my question would be: can a non-LVM, EXT-based file system be resized smaller and larger when being deployed?
Or should the EXT file system be under an LVM topology for resizing?
I’m not tied to XFS for a workstation OS by any means.
-
@aurfalien So ext filesystems are supposed to be able to be shrunk and expanded as required. They are, at least as of right now, the only Linux filesystems for which this is both possible and (as much as can be with FOSS) supported by the FOG team.
LVM support would be something I’d love to be able to add. A few years ago there was a program called CloneDeploy that mimicked FOG (and used FOG’s open-source nature in their system) but also had support for LVM detection, expansion, and (as far as I recall) shrinking. I don’t know if that program is still being supported by its developer(s), but I was never able to sit down long enough to figure out how it was operating. I did use their baseline to start trying to incorporate some segments but never really got it out the door or tested. So as of this time, no, LVM is not really supported for resizable drives. The underlying filesystems (NTFS, ext, etc.) are, but LVM layouts are fairly dynamic, which is part of why I couldn’t wrap my head around how best to do it.
That’s a lot of words to say, yes, EXT should work for both expansion and shrinking and last time I knew, this works and has for quite some time.
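For what it’s worth, here’s a minimal sketch (not FOG’s actual code, and the device path is only an example) of what an ext4 shrink/grow looks like when done by hand, which is essentially what the imaging scripts automate:
```bash
# Minimal manual ext4 resize sketch; /dev/sda1 is an example device, adjust to your layout.

# Shrinking (filesystem must be unmounted and clean first):
e2fsck -f /dev/sda1          # force a filesystem check before resizing
resize2fs /dev/sda1 100G     # shrink the filesystem itself to 100G
# ...then shrink the partition with parted/fdisk so it matches the new filesystem size.

# Growing (after the partition has been enlarged with parted):
resize2fs /dev/sda1          # with no size argument, grows to fill the partition
```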
-
@Tom-Elliott Thank you once again for a very informative reply. I’ll go ahead and use EXT as my OS file system and report back about expanding/contracting file system deployment in Fog. But I’m sure it will work, as Fog really has nothing to do with the filesystem itself.
-
@Tom-Elliott Hellloooo,
Soooo, after using EXT4 on my system, capturing it, and deploying to a different system, I boot into emergency mode due to the boot disk’s UUID not being found.
Are there any special things that need to be done before capturing a system using EXT4? Like any flags that should be set during the initial image creation before a capture takes place?
I can deploy an XFS-based image, which does not shrink or expand, but it seems as though I cannot deploy an EXT4-based image, which does shrink but somehow ends up with changed disk UUIDs?
-
@aurfalien Hmmm, I suppose so, and I probably need help in this regard (anyone with ideas to fix this, chime in).
The issue isn’t the UUID directly, though it definitely plays into it. I always forget about this.
The /etc/fstab is using UUIDs in place of the device naming. This makes perfect sense when you consider that NVMe/SSD/USB drives don’t always get the same device name, since it’s a first-come, first-served issue.
The “fix”, in its simplest form, used to be to edit the /etc/fstab file so that you use the right device names instead of the UUIDs.
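To make that concrete, here’s an illustrative (hypothetical) pair of fstab entries; the UUID and device name are made up:
```
# Hypothetical /etc/fstab fragment for illustration only.
# UUID-based entry - breaks if the deployed filesystem ends up with a different UUID:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6  /  ext4  defaults  0 1
# Device-name-based entry - works as long as the disk always enumerates the same way:
/dev/sda2                                  /  ext4  defaults  0 1
```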
However, this may vary depending on the system, but we have functions that could help automate that.
I apologize for overlooking that bit as well.
Ultimately:
We can get the UUID using the blkid command.
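For example (the device path here is just an example):
```bash
# Print only the UUID of a single partition:
blkid -s UUID -o value /dev/sda2
# -> e.g. 3e6be9de-8139-11d1-9106-a43f08d823a6

# Or dump everything in KEY=VALUE form (this is what the script below parses):
blkid -o export
```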
An example script (thanks ChatGPT for the assist):
```bash
#!/bin/bash
echo "# Generated fstab using UUIDs"
echo "# <file system> <mount point> <type> <options> <dump> <pass>"

# Get UUID, Device, and Type
while read -r device uuid type; do
    # Get the mount point using lsblk
    mountpoint=$(lsblk -no MOUNTPOINT "$device" | head -n 1)

    # If mountpoint is empty, use "unmounted"
    [[ -z "$mountpoint" ]] && mountpoint="unmounted"

    # Print fstab-style entry
    echo "UUID=$uuid $mountpoint $type defaults 0 2"
done < <(blkid -o export | awk -F= '/^DEVNAME/ {dev=$2} /^UUID=/ {uuid=$2} /^TYPE=/ {type=$2} dev && uuid && type {print dev, uuid, type; dev=uuid=type=""}')
```
Basically this is probably more than what’s wanted, but I would implore you to test something like this, using a post-download script for testing.
If you’re willing/able to adjust a bit: it will likely require getting the hard drive you imaged (getHarddisk from funcs.sh) and testing the d1.partitions against what was actually deployed, to update the internal /etc/fstab of the drive to use the UUIDs in a more dynamic approach.
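As a very rough, untested sketch of that idea (the mount point, root partition variable, and the sed-based replacement are all assumptions you’d need to adapt inside your own post-download script):
```bash
#!/bin/bash
# Untested sketch for a post-download script: rewrite the deployed fstab's root UUID.
rootpart="/dev/sda2"                 # assumption: the root partition that was just deployed
mntdir="/tmp/fstabfix"               # assumption: a scratch mount point
mkdir -p "$mntdir"
mount "$rootpart" "$mntdir" || exit 1

# UUID the filesystem actually has after deploy.
newuuid=$(blkid -s UUID -o value "$rootpart")

# UUID the old fstab still references for /.
olduuid=$(awk '$2 == "/" && $1 ~ /^UUID=/ {sub("UUID=", "", $1); print $1}' "$mntdir/etc/fstab")

if [[ -n $olduuid && -n $newuuid && $olduuid != "$newuuid" ]]; then
    sed -i "s/$olduuid/$newuuid/g" "$mntdir/etc/fstab"
fi

umount "$mntdir"
```
Note that this only touches fstab; if the bootloader configuration (e.g. GRUB’s root= line or the initramfs) also references the old UUID, it would need the same treatment.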
In the meantime, if you know your system consistently boots with the same device name (/dev/sda or /dev/nvme0n1 or whatever), then modify the UUID entries appropriately for your filesystems and it should boot just fine.