Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation)
-
@Sebastian-Roth Yeah you are right, I grabbed a different laptop to test on just now and everything is fine. Thanks for your help, I really appreciate it and your support.
Edit: The capture worked this time, but the deploy gives this error.
Picture was taken just now -
@george1421 @Junkhacker Is anyone able to reproduce this issue?!
-
@Sebastian-Roth said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
@george1421 @Junkhacker Is anyone able to reproduce this issue?!
I cannot duplicate the issue here, but that just means the circumstances (environment) are different.
I did see something that raised a red flag.
I have our images stored on our media server, if I path over to /mnt/media-images/<IMAGENAME>/d1.partitions
How are you doing this? You are not supposed to be able to reshare an NFS-mounted share (technically you can, but FOG is not configured to support it). You can't do that in MS Windows either.
Images are captured to the /images/dev/<mac_address> directory (an NFS share) and then "moved" to the /images/<image_name> directory using the FTP server built into the FOG server. At the very least, if /images/dev is on the FOG server and /mnt/media-images/<image_name> is on the NFS share, the FTP server will have to copy the entire image to the remote NFS server instead of just moving the file pointers, as it would if /images/dev and /images were on the same disk. If you want to store your images on an NFS NAS but have the FOG server manage the process, there is another (unsupported) configuration you can use instead of resharing an NFS mount.
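To see whether that FTP "move" on a given server is a cheap rename or a full copy, you can compare the device IDs of the two directories. A minimal sketch (the same_fs helper is mine for illustration, not part of FOG):

```shell
# stat -c %d prints the device ID a path lives on; two paths on the
# same filesystem can be renamed, while a cross-device move must copy.
same_fs() {
    [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]
}

# On the FOG server (paths from this thread; adjust to your layout):
#   same_fs /images/dev /images && echo "cheap rename" || echo "full copy"
```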
-
@george1421 Our storage node that we have set up is on 'media'.
'/mnt/media-images' is just an admin shortcut for us to navigate more easily; we aren't using that for uploading or downloading. Upon capture, the storage location on the FOG screen is listed as: ip address:/mnt/pool/it/images/dev/ Is it possible that the image is being captured into dev but not being copied out of dev during deploy?
The other thing I have to add: I have 7 different brands of laptops. All of the other brands work (capture and deploy) just fine. These new laptops we just got (30 Lenovo P52s) are at least two years newer than any other device in our inventory. Do you think they need to be set up on UEFI instead of Legacy boot? The only drastic difference is that these machines all have SSDs, which I didn't think would make a difference?
Thanks
-
@austinjt01 is 10.228.255.10 your fog server?
-
@george1421 no, that’s the IP of our file server. .3 is the FOG server IP
-
@austinjt01 So your file server is a linux server with the fog code installed? Or is it a linux (like) nas device?
Do you have this working using the non-supported configuration where you create a second storage node and then set that node as the master? I’m still trying to understand your landscape.
-
@george1421
Our file server is based on FreeNAS. I know that this isn't supported by FOG; I guess I am just curious as to why everything still works except for our new laptops.
'fog' is our master node and points to /mnt/media-images (backed by NFS); 'media' is an additional node and points to /mnt/pool/it/images (directly on the NAS). We set it up this way because the images are stored on a server that currently has about 12 TB free.
-
@austinjt01 Well… I don’t have an answer why it works on everything except this laptop model.
On your FreeNAS box you will then need the FTP service running, with the user ID and password you listed in the storage node definition, configured to allow FOG to move the files. If the FTP service isn't configured correctly, your images will stay in /images/dev/<mac_address> (you will need to translate that path into your environment) and never be renamed to /images/<image_name>.
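If in doubt, the FTP login can be checked from any machine with curl. This is a generic sketch; the host, user, and password below are placeholders you would replace with the values from your storage node definition:

```shell
# Attempt an FTP directory listing with the storage node credentials.
# $1=host $2=user $3=password; returns non-zero on a bad login,
# an unreachable host, or a refused/missing FTP service.
check_ftp() {
    curl --silent --connect-timeout 5 --user "$2:$3" "ftp://$1/" > /dev/null
}

# Example (placeholder values):
#   check_ftp 10.228.255.10 foguser secretpass && echo "FTP OK" || echo "FTP broken"
```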
While it's a little off-point and also not supported, here is a tutorial to make a Synology NAS function as a FOG storage node: https://forums.fogproject.org/topic/9430/synology-nas-as-fog-storage-node
-
@george1421 @Tom-Elliott I am still wondering if the "No space left on device" was caused by another error and it's not actually the rootfs being too small. I need to double-check when booted into FOS later on, but I think we don't actually have a space issue. Should we still increase the root fs size?
-
@Sebastian-Roth The "no space" is in regards to building the registry file we use to set the hostname on the machine. This happens in the /tmp space on FOS. There's possibly something else causing the no space left (meaning it's filling the initfs rather than the disk itself.) For example, a postdownload script trying to install drivers after mounting to /mnt (when the mount doesn't actually happen).
So it may not be something we directly caused, but it is something that occurred and is related to the initfs being filled up.
-
@Tom-Elliott said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
There’s possibly something else causing the no space left (meaning it’s filling the initfs rather than the disk itself.)
I am fairly sure this was the case here. I just booted a client into FOS and see that we still have 4.7 MB of free space. Doesn't sound like much, but that's heaps for a couple of text files we generate.
I was just wondering if adding more space will cause us more trouble than it helps. More often than not I forget to tell people to update the FOG settings when using a new init. Sure, we can update that value in the next release, but I am wondering if it's worth it. On the other hand, updating it a fair bit now will prevent issues that might come up as we keep adding to the inits little by little.
-
@Sebastian-Roth said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
I just booted a client into FOS and see that we still have 4.7 MB of free space
Didn’t we update this initfs to 256MB with this change to the build root config file? If I remember correctly the fog default was 100MB.
BR2_TARGET_ROOTFS_EXT2_SIZE="256M"
I’ll have to boot my last initrd file to see the free space, but 4MB sounds small. I might understand if its tempfs that is out of space. I think I remember that defaults to 4MB. That might be the space we are having issues with.
-
@george1421 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
Didn’t we update this initfs to 256MB with this change to the build root config file? If I remember correctly the fog default was 100MB.
Absolutely right, but I have a feeling this might cause us some trouble in the future when we tell people to manually update the inits to get some new fix and they end up in a kernel panic. Sure this is something we can fix quickly by telling them to update the size setting in the web UI but it's kind of annoying. On the other hand, we probably will need to push up the size at some point anyway - that, or I need to work through the whole config and see what we can get rid of (e.g. toss out partimage support at some point) to free some space.
I’ll have to boot my last initrd file to see the free space, but 4MB sounds small.
4.7 MB is free with the old 100 MB sized initrd. I think that’s enough at the moment.
-
@Sebastian-Roth said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
4.7 MB is free with the old 100 MB sized initrd. I think that’s enough at the moment.
Not to be a jerk about it, but it appears that's not the case: https://forums.fogproject.org/post/123344 Looking at that error message, FOS is trying to append to /usr/share/fog/lib/EOFREG. Do we know under what circumstances that extra 4.7MB of space is being consumed, not leaving room for this trivial text file? I don't see that issue with my deployments, so there must be something unique about what the OP is deploying. I don't feel that tossing out partimage would be of value because there are "still" people migrating from 0.3.x to the latest version of FOG.
@Sebastian-Roth said:
Sure this is something we can fix quickly by telling them to update the size setting in the web UI but it's kind of annoying.
Why not, with the next release, just update this value in the web UI to 256MB and then up the init fs size to 200MB? Having a bit more room in the FOS fs would also make it possible to slide in new (unsupported) applications during the FOS postinit process. I think it would be fairly rare to find a system with less than 1GB of RAM these days; even the ARM systems have 1GB. With a 200MB fs, that is still less than 1/4 of the available RAM consumed by the FOS fs.
Maybe the proper course of action, if we want to keep the 100MB fs disk, is to have the OP generate this error again in debug mode. When the "no disk space" issue is hit, Ctrl-C out and then run some CLI commands to find out where the disk space is consumed. There has to be something doing this in the OP's environment.
-
@george1421 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
so there must be something unique about what the OP is deploying
Exactly, and therefore there is not much value in extending the space, as we never know what people do. As Tom mentioned, this could be caused by a postinit script that tries to mount an external destination but fails and instead fills up the virtual root filesystem. It's just a guess, but unless we can replicate the issue we can't say what's causing the errors in the first place.
Why not, with the next release, just update this value in the web UI to 256MB and then up the init fs size to 200MB?
Sure, that would be what we'd do in case we increase the inits. But as I said before, we quite often have people who only need to update the inits, and they will run into the kernel panic (which is not a horror scenario but still ugly if it hits you without prior notice) if we forget to tell them about it. But well, the whole discussion is a bit over the top. I should just go ahead and update the size value in the DB as soon as possible. The earlier we get that out there, the less trouble this can cause.
Maybe the proper course of action, if we want to keep the 100MB fs disk, is to have the OP generate this error again in debug mode. When the "no disk space" issue is hit, Ctrl-C out and then run some CLI commands to find out where the disk space is consumed. There has to be something doing this in the OP's environment.
Right, @austinjt01 can you please do as George described?
-
@Sebastian-Roth Sorry I'm late to the party, haven't been on in a while. Yeah, I'm going to test this out right now in debug mode.
I tried to burn the computer with our postinit folder empty, thinking maybe that would change the problem, but it didn't.
@george1421 What exactly am I looking for in debug mode? Listing out the partitions and seeing what is consumed?
UPDATE - I just commented out the preload script that's supposed to copy our files over after the burn is complete, and it fixes our issue. While this is not ideal, I can at least make it work. I can add the few files that we want to copy over as part of my Audit mode image.
I believe I figured out the issue; it appears as though the device names between conventional HDs and SSDs are different. We had the script trying to copy these files into a device that doesn't exist.
So I guess I need to add a conditional to my BASH that checks if it's an HD or SSD.
The device name in our current script is looking for sda3, not nvme0n1p3 on the SSD.
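The two naming schemes can be handled in one place rather than hard-coding sda3. A minimal sketch (the part_dev helper name is mine, not from any FOG script): NVMe and eMMC block devices insert a "p" before the partition number, while SATA/SCSI disks do not:

```shell
# Build the partition device path for either disk naming scheme:
#   /dev/sda + 3      -> /dev/sda3
#   /dev/nvme0n1 + 3  -> /dev/nvme0n1p3
part_dev() {
    local disk="$1" num="$2"
    case "$disk" in
        *nvme*|*mmcblk*) echo "${disk}p${num}" ;;
        *)               echo "${disk}${num}" ;;
    esac
}
```

A postinstall script could then mount "$(part_dev "$disk" 3)" and work on both the older laptops and the new NVMe ones.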
@george1421 I feel more confident now after discovering this. However, if one of you thinks I'm not on the right path, feel free to let me know and I'll update you soon.
Thanks!
-
@austinjt01 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
What exactly am I looking for in debug mode? Listing out the partitions and seeing what is consumed?
When you hit that error where it says no disk space, hit Ctrl-C on the target computer's keyboard.
Then from the FOS console key in
lsblk
df -h
Let's start with those. That will tell us the current disk space usage so we can start to find out what happened.
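If df -h shows the init root at 100%, a follow-up du can narrow down which directory consumed the space. A generic sketch (the busybox versions of these tools in FOS may lack some flags):

```shell
# -x keeps du from crossing into other mounts; sizes are in MB.
# The largest directories are listed first.
du -xm / 2>/dev/null | sort -rn | head -15
```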
-
@austinjt01 said in Failed to Set Disk GUID (sgdisk -U) (restoreUUIDinformation):
UPDATE - I just commented out the preload script that's supposed to copy our files over after the burn is complete, and it fixes our issue. While this is not ideal, I can at least make it work. I can add the few files that we want to copy over as part of my Audit mode image.
I believe I figured out the issue; it appears as though the device names between conventional HDs and SSDs are different. We had the script trying to copy these files into a device that doesn't exist.
So I guess I need to add a conditional to my BASH that checks if it's an HD or SSD.
The device name in our current script is looking for sda3, not nvme0n1p3 on the SSD.

OK, this update changes things a bit. As you can see from the
df -h
command, your root partition has filled up. Now you said you are copying files around: are you copying drivers to the target image AND using one of the examples from the tutorials forum? If you are using one of my early scripts, it calls out /dev/sda2 directly in the code. There are updated scripts that came out after the NVMe disk era that will identify the correct Windows partition to copy the scripts to. This is one such updated tutorial: https://forums.fogproject.org/topic/11126/using-fog-postinstall-scripts-for-windows-driver-injection-2017-ed
-
@george1421 I’m not copying drivers in any of my images. I switched to a per machine basis. The only thing I want to copy over is some test patterns, speaker timer software, and some royalty free music, (That’s what I mean by “Files”)
I’m going to look at this more tomorrow when i’m back in the office and i’ll let you know.