I’ve seen the OP’s issue for several years on ~20 fog servers on various hardware platforms (virtual and bare metal), using both resized and non-resized images. I can confirm that this was still an issue on 1.5.8, but today I upgraded to 1.5.9 and it seems to be resolved. Old images still show the incorrect size, but recapturing them updates the image size on client to the correct value, which is approximately the minimum hard drive capacity required on the client when deploying the image.
Best posts made by benc
-
RE: Image Size on Client Incorrect
-
RE: Multicast randomly hangs around 70-90% on last partition
@sebastian-roth I will try putting different hard drives in the clients, and if that shows the same results I’ll probably just reinstall Win10 on one of the machines, capture that, and use that as my smaller test image.
-
RE: Multicast randomly hangs around 70-90% on last partition
@sebastian-roth The switches at each location are identical, and the configuration is fundamentally the same, except that some locations have two switches stacked together to provide enough ports. One VLAN, same addressing scheme, same types of devices connected. Right now I’m combing through the details of the configs, comparing the working locations to the ones that don’t work. It could also be something to do with the fact that some locations have two switches and others have just one. That shouldn’t matter, but who knows. I’ll check back in with my findings.
Latest posts made by benc
-
Details of image_export.csv
Dell PowerEdge R730
Dual Xeon CPUs
32 GB RAM
8 x 2TB SAS drives in RAID5
Fog 1.5.9 is running fine under Windows Server 2019 Hypervisor
Hello all, I’m trying to copy a fog image from Fog server A to Fog server B. I’ve done this before by copying the image files from A to B using FTP and then using the import/export function in the Fog web GUI. I have the image files from server A via FTP, but I don’t have the image listed in image_export.csv from server A because at some point this image was deleted from the GUI on server A.
I need to get this image imported to the new Fog server somehow. I’m also curious what all the different columns in image_export.csv mean. I can clearly see things like image name, description, date, username, etc., but there are several more fields with either single-digit values or several strings separated by colons. I was thinking about taking some time to reverse-engineer it, but I was hoping someone out there would already know what all those columns mean.
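In the meantime, one quick way to start matching columns up is to number the fields of a row and compare them against what the web GUI shows. This is just a sketch: the sample row below is made up, and it assumes comma-separated fields with no embedded commas inside values, which may not hold for every export.

```shell
# Made-up sample row standing in for a real image_export.csv export.
printf "'Win10','Win 10 image','ben','2020-01-01 08:00:00','1','1:2:3'\n" > /tmp/image_export_sample.csv

# Number each field so the columns can be matched up by eye.
head -n 1 /tmp/image_export_sample.csv | tr ',' '\n' | nl
```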
Thanks,
Ben -
RE: Problem capturing image - " no space left on device"?
I’ve never run into this specific problem until today. Generally, issues with resizing can be fixed by increasing the CAPTURERESIZEPCT in Fog Settings > General Settings. I agree with what @Quazz said about certain kinds of fragmentation causing issues with the theoretical minimum partition size. I’ve found that the more used (fragmented?) a drive is, the more likely it is to fail during resize. I was able to get past the
no space left on device
and
numerical result out of range
errors by increasing the CAPTURERESIZEPCT to 15%. -
RE: capture a image over the fog boot menu
This would be a great feature to have, even if it were limited to only capturing to the image already assigned to the already registered host. It would be a good idea to have an extra confirmation message and invert the color scheme to make sure the user realizes what they’re about to do. However, deploying incorrectly can cause just as much damage as capturing incorrectly. FOG is a very sharp, double-edged tool.
Sometimes it’s a chore to go find another PC with network access so I can log in and set up the capture task, or to set up the capture task on the same PC before rebooting it. I can’t always carry my laptop with me, and I don’t enjoy navigating web interfaces on my phone.
-
RE: Add setting for amount of free space on resizable images
Cool! I didn’t know that option was there. Thanks.
-
Add setting for amount of free space on resizable images
Would it be possible to implement a setting in the Fog web interface to control how much the partitions are shrunk when capturing an image? I’m running into this problem more often where my resizable images don’t successfully resize before capture, I’m guessing because the script running parted is being a little too aggressive and not leaving enough free space. Parted usually throws an error like
No free mft record for $MFT: No space left on device
Could not allocate new MFT record: No space left on device
I know from experience if I could set it to leave just a percent or two more free space it would probably resize successfully. It would be really nice to be able to set a target percent of free space on resizable images.
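To make the idea concrete, here’s roughly the arithmetic I have in mind, sketched in shell. The variable names and the formula are my own illustration, not how FOG’s resize script actually computes it:

```shell
# Rough sketch of "shrink target = used space + safety margin".
# The numbers are made up; FOG's actual resize logic may differ.
used_mb=9800      # space in use on the partition, in MB
margin_pct=15     # extra free space to leave, like CAPTURERESIZEPCT
target_mb=$(( used_mb + used_mb * margin_pct / 100 ))
echo "shrink partition to about ${target_mb} MB"   # prints: shrink partition to about 11270 MB
```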
-
RE: Mounting and extracting files from an image
Ran this from the Ubuntu 18 box:
root@ubuntu18:~# zstdmt -dc </images/_Windows10Prox641909/d1p4.img | partclone.info -s -
Partclone v0.3.11 http://partclone.org
Showing info of image (-)
File system:     NTFS
Device size:     9.9 GB = 2412369 Blocks
Space in use:    9.6 GB = 2351625 Blocks
Free Space:      248.8 MB = 60744 Blocks
Block size:      4096 Byte
image format:    0002
created on a:    64 bits platform
with partclone:  v0.3.13
bitmap mode:     BIT
checksum algo:   NONE
checksum size:   n/a
blocks/checksum: n/a
reseed checksum: n/a
-
RE: Mounting and extracting files from an image
I bet my issue is partclone being an older version. I spun up a new Ubuntu 18 server during lunch and installed partclone and zstd and ran
zstdmt -dc </images/_Windows10Prox641909/d1p4.img | partclone.restore -C -O /d1p4.extracted.img -Nf 1 --restore_raw_file
and it extracted the image successfully. I suppose it’s time for me to get away from Ubuntu 16.
I’m a little hesitant to jump into learning how to compile programs from source. The last time I compiled anything was in QBasic in the late 90s.
@Quazz I really appreciate your help.
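In case it helps anyone else later, here’s that working recipe wrapped in a small helper function. This is just a sketch: it assumes partclone 0.3.x and zstd are installed, the paths are examples from my setup, and the mount step needs root.

```shell
# Sketch: rebuild a raw partition image from a zstd-compressed FOG image.
# Assumes partclone >= 0.3 and zstd are on the PATH. (Add -Nf 1 to the
# partclone.restore call if you want the ncurses progress display.)
# Usage: extract_fog_part /images/_Windows10Prox641909/d1p4.img /tmp/d1p4.raw
extract_fog_part() {
    zstdmt -dc <"$1" | partclone.restore -C -O "$2" --restore_raw_file
}

# The raw image can then be loop-mounted read-only (as root):
#   mkdir -p /mnt/d1p4
#   mount -o loop,ro /tmp/d1p4.raw /mnt/d1p4
```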
-
RE: Mounting and extracting files from an image
I’m finding that partclone error messages aren’t very helpful. Perhaps I should be using some debug or verbose option.
root@fog:~# zstdmt -dc </images/_Windows10Prox641909/d1p4.img | partclone.info -s -
Partclone v0.2.86 http://partclone.org
Display image information
main, 153, not enough memory
Partclone fail, please check /var/log/partclone.log !
-
RE: Mounting and extracting files from an image
I ran
apt update
and
apt upgrade
but my partclone is still at 0.2.86. I also ran
zstdmt -dc </images/_Windows10Prox641909/d1p4.img | partclone.restore -C -O /d1p4.extracted.img -Nf 1 --restore_raw_file
and this time it brought up partclone’s text-based interface, which gave me the same error saying I needed 820488013636592786 bytes of memory.
I did some digging on Google and now I’m wondering if I need to upgrade to Ubuntu 18 to be able to get the new partclone.
EDIT: My zstd is at version 1.3.1. I’m able to decompress the image file (3 GB) to another image file that is roughly the size of the data that should be on that partition (8 GB), so I’m thinking zstd is ok.
-
Mounting and extracting files from an image
Re: Mount and Extract files from images
TL;DR: +1 for this feature
Since I started using Fog around 2017 there have been several occasions where it would have been really handy to be able to decompress and mount a Fog image so I can grab a few files or folders from it. I tend to use Fog for backing up old machines just as much as deploying new ones. Right now I’ve got a dummy VM on a lab server set to boot from the network and I’ll deploy an image to it when I need to recover something. This works OK, and I usually end up just mounting the .vhdx to another test VM as a secondary drive so I can browse and copy what I need. It just takes a while when I have an image that is several hundred GB and I only need one file from it. I know even if this were done on the Fog server it would still have to decompress and extract the entire image, but it would be nice if this could be automated. It would eliminate a lot of image juggling and deploying and potential human error.
I’ve spent the last two days trying to figure out how to mount a Fog image in Ubuntu 16 Server. I can decompress it but partclone always gives some kind of error or tells me I need almost an exabyte of memory. Here is what I’ve tried:
sudo -i
cd /images/_Windows10Prox641909
touch d1p4.extracted.img
cat d1p4.img | zstd -dcf | partclone.restore -C -s - -O /d1p4.extracted.img --restore_raw_file
and here is what I get:
Partclone v0.2.86 http://partclone.org
Starting to restore image (-) to device (d1p4.extracted.img)
There is not enough free memory, partclone suggests you should have 820488013636592786 bytes memory
Partclone fail, please check /var/log/partclone.log !
I’ve come across several examples of this being done as well as different ways to do the same thing, but none of them have worked for me. If I could figure out what partclone needs or figure out the correct syntax I could script the process and make it a bit less painful. I’ve also tried
partclone.ntfs
instead of
partclone.restore
but it gives the same results. This Ubuntu box has 2.17 TB of free space, so there should be plenty of room to extract the entire image to a raw file.
d1p4.img
is a 127 GB NTFS partition in this case.
Thank you for your time and consideration.