@aurfalien Hmmm, I suppose I’d probably need help in this regard too (anyone with ideas to fix this, chime in).
The issue isn’t the UUID directly, though it definitely plays a part in it. I always forget.
The /etc/fstab is using UUIDs in place of device names. This makes perfect sense when you consider that NVMe/SSD/USB drives don’t always get the same device name, since enumeration is first come, first served.
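You can see the mapping for yourself; every filesystem UUID shows up as a symlink pointing at whichever device name it landed on this boot:

# udev maintains symlinks from each UUID to the device that currently owns it
ls -l /dev/disk/by-uuid/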
The “fix”, in its simplest form, used to be to edit the /etc/fstab file so that you use the right device names instead of the UUIDs.
However, this may vary depending on the system, but we have functions that could help automate that.
I apologize for overlooking that bit as well.
Ultimately:
We can get the UUID using the blkid command.
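For example, to check a single partition (swap /dev/sda1 for your own):

# Print just the UUID of one partition
blkid -s UUID -o value /dev/sda1
# Or dump everything blkid knows, device by device
blkid -o export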
An example script (thanks ChatGPT for the assist):
#!/bin/bash
echo "# Generated fstab using UUIDs"
echo "# <file system> <mount point> <type> <options> <dump> <pass>"
# Walk every block device blkid knows about, pulling the device name,
# UUID, and filesystem type out of its export-format output
while read -r device uuid type; do
    # Get the current mount point (if any) using lsblk
    mountpoint=$(lsblk -no MOUNTPOINT "$device" | head -n 1)
    # If the device isn't mounted, flag it so you can fill it in by hand
    [[ -z "$mountpoint" ]] && mountpoint="unmounted"
    # Print an fstab-style entry
    echo "UUID=$uuid $mountpoint $type defaults 0 2"
done < <(blkid -o export | awk -F= '
    /^$/        {dev=uuid=type=""}  # blank line = new device block; reset
    /^DEVNAME=/ {dev=$2}
    /^UUID=/    {uuid=$2}
    /^TYPE=/    {type=$2}
    dev && uuid && type {print dev, uuid, type; dev=uuid=type=""}
')
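If you want to try it, something like this lets you eyeball the result without touching the real file (the script name is just an example):

chmod +x gen-fstab.sh
sudo ./gen-fstab.sh > /tmp/fstab.generated
diff /etc/fstab /tmp/fstab.generated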
Basically this is probably more than what’s wanted, but I would implore you to test something like this and use a post-download script for testing.
If you’re willing/able to adjust things a bit, it will likely mean getting the hard drive you imaged (getHarddisk from funcs.sh) and checking the d1.partitions data against what was actually deployed, then updating the /etc/fstab inside the deployed drive to use the UUIDs in a more dynamic way. A rough sketch of that idea follows.
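To be clear, I haven’t tested this; it’s just what I’m picturing a post-download script might look like. The funcs.sh path, partition number, and mount point are all assumptions you’d adjust for your image:

#!/bin/bash
# Sketch of a FOG post-download script (untested): after deploy, mount the
# target's root partition and rewrite its /etc/fstab root entry with the
# UUID the partition actually got on this machine.
. /usr/share/fog/lib/funcs.sh    # funcs.sh location on FOS; verify on your version
getHarddisk                      # sets $hd to the target disk, e.g. /dev/sda
[[ -z $hd ]] && handleError "Could not determine target disk"

rootpart="${hd}2"                # assumption: root filesystem is partition 2
mntdir="/tmp/fstab-fix"
mkdir -p "$mntdir"
mount "$rootpart" "$mntdir" || handleError "Could not mount $rootpart"

# Grab the UUID this partition has *after* deployment...
newuuid=$(blkid -s UUID -o value "$rootpart")
# ...and swap it into the line whose mount point is / (options stay intact)
sed -i "s|^UUID=[^[:space:]]*\([[:space:]]\+/[[:space:]]\)|UUID=$newuuid\1|" "$mntdir/etc/fstab"

umount "$mntdir"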
In the meantime, if you know your system consistently boots with the same device name (/dev/sda or /dev/nvme0n1 or whatever), then replace the UUID entries with the right device names for your filesystems and it should boot just fine. For example:
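# Before (UUID from the machine the image was taken from):
# UUID=1234abcd-56ef-78ab-90cd-ef1234567890  /  ext4  errors=remount-ro  0  1
# After (works as long as the disk always enumerates the same way):
/dev/sda2  /  ext4  errors=remount-ro  0  1

(The UUID, device name, and options above are placeholders; use your own.)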