Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation)
-
@austinjt01 Well… I don’t have an answer as to why it works on everything except this laptop model.
On your FreeNAS box you will then need the FTP service running, with the user ID and password you listed in the storage node definition configured to allow the FOG client to move the files. If the FTP service isn’t configured correctly, your images will stay in /images/dev/<mac_address> (you will need to translate that path into your environment) and will not be renamed to /images/<image_name>.
While it’s a little off-point and also not supported, here is a tutorial to make a Synology NAS function as a FOG Storage Node: https://forums.fogproject.org/topic/9430/synology-nas-as-fog-storage-node
-
@george1421 @Tom-Elliott I am still wondering if the “No space left on device” was caused by another error and it’s not actually the rootfs being too small. I need to double-check when booted into FOS later on, but I think we don’t actually have a space issue. Should we still increase the root fs size?
-
@Sebastian-Roth The “no space” is in regards to building the registry file we use to set the hostname on the machine. This happens in the /tmp space on FOS. There’s possibly something else causing the no space left (meaning it’s filling the initfs rather than the disk itself). For example, a postdownload script trying to install drivers after mounting to /mnt (but the mount doesn’t actually happen).
So it may not be something we directly caused, but it is something that occurred and is related to the initfs being filled up.
-
@Tom-Elliott said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
There’s possibly something else causing the no space left (meaning it’s filling the initfs rather than the disk itself).
I am fairly sure this was the case here. I just booted a client into FOS and see that we still have 4.7 MB of free space. Doesn’t sound like much but that’s heaps for a couple of text files we generate.
I was just wondering if adding more space will cause us more trouble than it helps. More often than not I forget to tell people to update the FOG settings when using the new init. Sure, we can update that value in the next release, but I am wondering if it’s worth it. On the other hand, updating it a fair bit now will prevent issues that might come at some point as we keep adding to the inits little by little.
-
@Sebastian-Roth said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
I just booted a client into FOS and see that we still have 4.7 MB of free space
Didn’t we update this initfs to 256MB with this change to the Buildroot config file? If I remember correctly the FOG default was 100MB.
BR2_TARGET_ROOTFS_EXT2_SIZE="256M"
I’ll have to boot my last initrd file to see the free space, but 4MB sounds small. I might understand if it’s tmpfs that is out of space. I think I remember that defaults to 4MB. That might be the space we are having issues with.
-
@george1421 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
Didn’t we update this initfs to 256MB with this change to the Buildroot config file? If I remember correctly the FOG default was 100MB.
Absolutely right, but I have a feeling this might cause us some trouble in the future when we tell people to manually update the inits to get some new fix and they’ll end up with a kernel panic. Sure this is something we can fix quickly by telling them to update the size setting in the web UI but it’s kind of annoying. On the other hand we probably will need to push up the size at some point anyway - that, or I need to work through the whole config and see what we can get rid of (e.g. toss out partimage support at some point in time) to free some space.
I’ll have to boot my last initrd file to see the free space, but 4MB sounds small.
4.7 MB is free with the old 100 MB sized initrd. I think that’s enough at the moment.
-
@Sebastian-Roth said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
4.7 MB is free with the old 100 MB sized initrd. I think that’s enough at the moment.
Not to be a jerk about that, but it appears not to be the case: https://forums.fogproject.org/post/123344 Looking at that error message, FOS is trying to append to
/usr/share/fog/lib/EOFREG
. Do we know under what circumstances that extra 4.7MB of space is being consumed, leaving no room for this trivial text file? I don’t see that issue with my deployments, so there must be something unique about what the OP is deploying. I don’t feel that tossing out partimage would be of value because there are “still” people migrating from 0.3.x to the latest version of FOG.
Sure this is something we can fix quickly by telling them to update the size setting in the web UI but it’s kind of annoying.
Why not with the next release just update this value in the web UI to 256MB and then up the init fs size to 200MB? Having a bit more room in the FOS filesystem would also give us the possibility to slide in new (unsupported) applications during the FOS postinit process. I think it would be fairly rare to find a system with less than 1GB of RAM these days; even the ARM systems have 1GB of RAM. With a 200MB fs that is still consuming less than 1/4 of the available RAM for the FOS fs.
Maybe the proper course of action, if we want to keep the 100MB fs disk, is to have the OP generate this error again in debug mode; when the “no disk space” issue is hit, ctrl-c out and then run some CLI commands to find out where the disk space is consumed. There has to be something doing this in the OP’s environment.
-
@george1421 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
so there must be something unique about what the OP is deploying
Exactly, and therefore there is not much value in extending the space as we never know what people do. As Tom mentioned, this could be caused by a postinit script that might try to mount an external destination but fails and instead fills up the virtual root filesystem. It’s just a guess, but unless we can replicate the issue we can’t say what’s causing the errors in the first place.
Why not with the next release just update this value in the web UI to 256MB and then up the init fs size to 200MB?
Sure, that would be what we’d do in case we increase the inits. But as I said before, we quite often have people who only need to update the inits, and they will run into the kernel panic (which is not a horror scenario but still ugly if it hits you without prior notice) if we forget to tell them about it. But well, the whole discussion is a bit over the top. I should just go ahead and update the size value in the DB as soon as possible. The earlier we have that out there, the less trouble we can get into with this.
Maybe the proper course of action, if we want to keep the 100MB fs disk, is to have the OP generate this error again in debug mode; when the “no disk space” issue is hit, ctrl-c out and then run some CLI commands to find out where the disk space is consumed. There has to be something doing this in the OP’s environment.
Right, @austinjt01 can you please do as George described?
-
@Sebastian-Roth Sorry I’m late to the party, haven’t been on in a while. Yeah, I’m going to test this out right now in debug mode.
I tried to burn the computer with our postinit folder empty, thinking maybe that would change the problem, but it didn’t.
@george1421 What exactly am I looking for in debug mode? Listing out the partitions and seeing what is consumed?
**UPDATE** - I just commented out the preload script that’s supposed to copy our files over after the burn is complete and it fixes our issue. While this is not ideal, I can at least make it work. I can add the few files that we want to copy over as part of my Audit mode image.
I believe I figured out the issue; it appears as though the device names between conventional HDs and SSDs are different. We had the script trying to copy these files to a device that doesn’t exist.
So I guess I need to add a conditional to my Bash script that checks if it’s a HD or SSD.
The device name in our current script is looking for sda3, not nvme0n1p3 on the SSD.
@george1421 I feel more confident now after discovering this; however, if one of you thinks I’m not on the right path, feel free to let me know and I’ll update you soon.
Thanks!
-
@austinjt01 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
What exactly am I looking for in debug mode? Listing out the partitions and seeing what is consumed?
When you hit that error where it says no disk space, hit ctrl-c on the target computer’s keyboard.
Then from the FOS console key in
lsblk
df -h
Let’s start with those. That will tell us the current disk space usage so we can start to find out what happened.
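If df shows the FOS root filesystem at 100%, a follow-up with du can usually point at which directory or file is eating the space. Here is a hedged sketch; the scratch directory and the 2 MB test file are stand-ins so the commands can run anywhere, while on the FOS console you would aim df and du at / or /tmp instead:

```shell
#!/bin/sh
# Stand-in demo: create a scratch dir with a known 2 MB file so df/du
# have something to report. On FOS, point these at / (or /tmp) instead.
scratch=$(mktemp -d)
dd if=/dev/zero of="$scratch/big.bin" bs=1024 count=2048 2>/dev/null

# Filesystem-level usage, same idea as the df -h check above:
df -h "$scratch" | tail -n 1

# Per-entry usage under the suspect path, biggest last:
largest=$(du -xk "$scratch"/* | sort -n | tail -n 1)
echo "largest entry: $largest"

rm -rf "$scratch"
```

The -x flag keeps du from crossing into other mounted filesystems, which matters on FOS where /proc, /sys and any mounted target disk would otherwise pollute the numbers.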
-
@austinjt01 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
**UPDATE** - I just commented out the preload script that’s supposed to copy our files over after the burn is complete and it fixes our issue. While this is not ideal, I can at least make it work. I can add the few files that we want to copy over as part of my Audit mode image.
I believe I figured out the issue; it appears as though the device names between conventional HDs and SSDs are different. We had the script trying to copy these files to a device that doesn’t exist.
So I guess I need to add a conditional to my Bash script that checks if it’s a HD or SSD.
The device name in our current script is looking for sda3, not nvme0n1p3 on the SSD.
OK, this update changes things a bit. As you can see from the
df -h
command, your root partition has filled up.
Now you said you are copying files around: are you copying drivers to the target image AND are you using one of the examples from the tutorials forum? If you are using one of my early scripts, it calls out /dev/sda2 directly in the code. There are updated scripts that came out after the NVMe disk era that will identify the correct Windows partition to copy the scripts to. This is one such updated tutorial: https://forums.fogproject.org/topic/11126/using-fog-postinstall-scripts-for-windows-driver-injection-2017-ed
-
@george1421 I’m not copying drivers in any of my images; I switched to a per-machine basis. The only things I want to copy over are some test patterns, speaker timer software, and some royalty-free music (that’s what I mean by “files”).
I’m going to look at this more tomorrow when I’m back in the office and I’ll let you know.
-
@austinjt01 OK, let’s post your postinstall script where you are moving files to the target computer. Let’s see how you are going about this. Remember to redact any private information; I’m only interested in seeing how you mount the target drive in your script.
-
#!/bin/sh
echo " ";
echo " * FOG file preload";
echo " ";
mkdir /ntfs &>/dev/null
# if /dev/sda3 exists, do this
ntfs-3g -o force,rw /dev/sda3 /ntfs
# else if /dev/nvme0n1p3 exists, do this
ntfs-3g -o force,rw /dev/nvme0n1p3 /ntfs
echo -n " * Copying preload files.............................";
cp -r /images/preload/* /ntfs/
umount /ntfs
echo "Done.";
echo " ";
echo " * File preload completed.";
sleep 2;
This is what I started yesterday; the train of thought being: just do a check for sda vs nvme?
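One way to sketch that check (a hedged example, not the final script: the candidate list is an assumption based on the two device names mentioned in this thread, and real hardware may use other names or partition numbers):

```shell
#!/bin/sh
# find_part: print the first candidate that exists as a block device.
# The candidate list here is an assumption; adjust for your hardware.
find_part() {
    for part in "$@"; do
        if [ -b "$part" ]; then
            echo "$part"
            return 0
        fi
    done
    return 1
}

if target=$(find_part /dev/sda3 /dev/nvme0n1p3); then
    echo " * Windows partition found at $target"
    # ntfs-3g -o force,rw "$target" /ntfs   # then mount as in the script above
else
    echo " * No known Windows partition found, skipping preload."
fi
```

The [ -b ] test is the key piece: it only succeeds for a path that exists and is a block device, so a missing nvme0n1p3 on a SATA machine (or vice versa) is skipped cleanly instead of making ntfs-3g fail.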
-
@austinjt01 OK, that is a great starting point. I’ll again refer you to my tutorial: https://forums.fogproject.org/topic/11126/using-fog-postinstall-scripts-for-windows-driver-injection-2017-ed
Look at the fog.custominstall script. Start with that script and then add your code in this section:
echo "Done"
debugPause
. ${postdownpath}fog.copydrivers
# . ${postdownpath}fog.updateunattend
umount /ntfs
Remove the calls to fog.copydrivers and fog.updateunattend and insert your code. At this point in the code, /ntfs has been mounted onto the Windows (C: drive) partition, wherever it may be. The script takes into account NVMe vs SATA as well as partitions being in a different order or location.
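For the copy step itself, here is a hedged stand-in for what would go in place of the fog.copydrivers call. On FOS the source would be /images/preload and /ntfs would already be mounted; the temp directories below simulate both sides so the logic can be tried anywhere:

```shell
#!/bin/sh
# Stand-in for the copy step inside fog.custominstall. On FOS, /ntfs is
# already mounted on the Windows partition at this point and the source
# is /images/preload; temp dirs simulate both here so this runs anywhere.
preload=$(mktemp -d)   # stands in for /images/preload
ntfs=$(mktemp -d)      # stands in for the mounted /ntfs

echo "test pattern data" > "$preload/pattern.txt"

echo -n " * Copying preload files............................."
cp -r "$preload"/. "$ntfs"/
echo "Done."
```

Using `"$preload"/.` as the source copies the directory’s contents (including dotfiles) rather than the directory itself, which mirrors what `cp -r /images/preload/* /ntfs/` was aiming for in the earlier script.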