The future of partclone and therefore FOG as it is
-
@Junkhacker said in The future of partclone and therefore FOG as it is:
a new feature i want to utilize was added in 1.3.8.
We can patch Buildroot to use the newer version. That shouldn’t be much trouble I suppose.
-
@Junkhacker As long as it’s stable <grin> we can manually update the version in Buildroot to pull a later or earlier release. We’ll just need to watch whether any build requirements have changed during compiling. (Hint: it’s not a big deal to change.) <as spoken by a non-developer, but someone who has worked with Buildroot quite a bit lately>
-
@Sebastian-Roth you’re right. it’s -aX0, that was a typo. The --rsyncable option is for zstd, which should also be given the option -B128 to specify the window block size that’s just about ideal for deduplication.
Another option for zstd, which is functionally incompatible with --rsyncable, is --long, which allows you to dedicate more memory to the window of compared data in zstd; this can substantially improve the compression. The --long option, however, will also require the --long option to be specified during decompression.
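for anyone who wants to play with those flags by hand, here’s a rough sketch (the fifo path, compression level, and file names are only placeholders, not what the FOG scripts actually use):
# dedup/rsync friendly capture, flags as discussed above
zstd -T0 -6 --rsyncable -B128 < /tmp/pigz1 > d1p2.img.zst
# or trade rsyncability for a larger match window (often a better ratio)
zstd -T0 -6 --long < /tmp/pigz1 > d1p2.img.zst
# an archive made with --long also needs --long (or enough --memory) to decompress
zstd -dc --long < d1p2.img.zst > /tmp/pigz1
-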
Looking at the fog bash scripts, I’m seeing partclone called here:
fog.upload: partclone.imager -c -s "$hd" -O /tmp/pigz1 -N -f 1
and then in funcs.sh
funcs.sh: zstdmt -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -Nf 1
funcs.sh: # Uncompressed partclone
funcs.sh: cat </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -Nf 1
funcs.sh: #[[ ! $? -eq 0 ]] && zstdmt -dc </tmp/pigz1 | partclone.restore --ignore_crc -O ${target} -N -f 1 || true
funcs.sh: # GZIP Compressed partclone
funcs.sh: #zstdmt -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -N -f 1
funcs.sh: pigz -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -N -f 1
funcs.sh: #[[ ! $? -eq 0 ]] && cat </tmp/pigz1 | partclone.restore --ignore_crc -O ${target} -N -f 1 || true
funcs.sh: [[ ! $exitcode -eq 0 ]] && handleWarning "Image failed to restore and exited with exit code $exitcode (${FUNCNAME[0]})\n Info: $(cat /tmp/partclone.log)\n Args Passed: $*"
funcs.sh: echo " * Using partclone.$fstype"
funcs.sh: partclone.$fstype -n "Storage Location $storage, Image name $img" -cs $part -O $fifoname -Nf 1
I’ve updated the zstd package in buildroot to 1.3.8 for the next build run to see if it compiles OK.
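The version bump itself is only a couple of lines in the Buildroot package definition, roughly like this (the SITE line and the hash entry are from memory and only illustrative, check the existing package/zstd files for the exact form):
# package/zstd/zstd.mk
ZSTD_VERSION = 1.3.8
ZSTD_SITE = $(call github,facebook,zstd,v$(ZSTD_VERSION))
# package/zstd/zstd.hash needs a matching checksum line for the new tarball, e.g.
# sha256  <sha256 of the 1.3.8 tarball>  zstd-1.3.8.tar.gz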
-
OK I ran into a little roadblock here and then an observation.
- FOS uses pigz to compress/decompress images; it was last updated in Dec 2017. I have not confirmed this yet, but I’m going to suspect that it doesn’t support the --rsyncable switch. So now I have to raise the question, since the pigz project hasn’t had any updates since then: do we continue with pigz or switch back to gzip? Have the advantages of pigz in 2017 been superseded by gzip 1.9 (buildroot) or 1.10 (current)?
[Edit] Well I should have looked at the man page for pigz before posting. There is a -R (--rsyncable) command line option. I’ll have an updated init.xz created in a few minutes with the options selected (rough pipeline sketch after this list).
- I noticed that FOS can split the image file into 200MB blocks. I question the logic of needing this now that image files are multiple GiB in size. I might understand 2GB blocks because of the 32 bit limitations, but 200MB? This is not something we need to address now, but I just wonder if it’s a legacy setting that no one ever uses.
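Roughly what the capture side would look like with those options in play, just to illustrate the idea rather than how funcs.sh actually wires it up (the fifo, image path, split size and compression level are placeholders):
# compress the partclone stream rsync/dedup friendly and split into 200MB chunks
pigz -6 --rsyncable < /tmp/pigz1 | split -b 200M - /images/dev/macaddress/d1p2.img.
# restore side: stitch the chunks back together and decompress into the fifo
cat /images/dev/macaddress/d1p2.img.* | pigz -dc > /tmp/pigz1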
-
@george1421 The 200 MB split option isn’t really legacy, in fact quite the contrary.
The idea of the smaller MB options was so you could fit the files on a series of CDs if needed. This isn’t necessary for most people, but I could see it being useful in some cases.
-
@george1421 pigz still outperforms gzip. gzip still doesn’t support multi-threading, and i don’t think they have any plans to implement it. also, pigz uses a slightly different formula for its rsyncable implementation that i have tested to be slightly better at creating chunk data that dedups.
blame me for the 200MB split code. it was an experiment that got pushed into the mainline code, but the idea was to make it easier for people running FOG to make backups to CD/DVD/external hard drive.
-
@george1421 said in The future of partclone and therefore FOG as it is:
--ignore_crc
Still no joy restoring a previously captured image using partclone 0.3.12. Just to be sure I changed the target system’s inits back to the FOG default init and the image deployed correctly on the target computer.
Next step is to rebuild a reference image and capture it with partclone 0.3.12 to see if it’s a legacy vs current partclone issue.
For reference here is a link to my test init: https://drive.google.com/open?id=1L3CxtRXn4cwLksu-41OcGyZ_yd5qlK1h
-
@george1421 line 2052 of funcs.sh needs the -aX0 flag set, but i don’t see anything else wrong yet. i’ll test it as soon as i can.
-
i tried the init you supplied, and i got a "failed to set disk guid (sgdisk -U) (restoreUUIDInformation)"
though, it did boot fine
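for context, that step presumably boils down to something along these lines (the GUID and disk are placeholders, this is not the actual restoreUUIDInformation code):
# write the disk GUID recorded at capture time back onto the target disk
sgdisk -U 12345678-ABCD-EF01-2345-6789ABCDEF01 /dev/sda || echo "failed to set disk guid (sgdisk -U)"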
-
if i use the init @Sebastian-Roth supplied here https://forums.fogproject.org/post/119056 and edit the funcs.sh to remove the --ignore_crc flags, i have no problems when deploying any images. just fyi
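in case anyone wants to reproduce that, stripping the flag out of an unpacked init is a one-liner along these lines (the funcs.sh path inside the init is an assumption here, adjust to wherever it lives in yours):
# remove every --ignore_crc occurrence from the restore commands
sed -i 's/ --ignore_crc//g' usr/share/fog/lib/funcs.sh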
-
Ok first of all I updated the init on the google drive (the end result here will be to provide the developers a diff file between the changes and the original).
So your deployment worked and mine did not. So we need to find out what’s different.
In my case it was a uefi image deployed to a Dell 6430 in uefi mode. Using the original inits this system deployed ok. With the partclone v0.3 it did not.
The deployed image was Win10 Ent 1803 partclone gzip compressed. OS=Win10 (9) single disk resizable all partitions, compression 6.
I got tied up today, so I haven’t had a chance to build a new golden image to attempt to capture and deploy with the same (upgraded) partclone.
-
@george1421 mine was a bios image deployed in legacy mode. the image was a Win 7 Ent zstd compressed resizable, compression 11.
i also tested a windows 10 image that was otherwise the same. with uefi, the UUID information not applying error i got would be important. did yours throw a visible error? do your partitions have the data intact on them, or are they corrupt? (i had problems in testing with corrupted deployments when using the --ignore_crc flag, which is why i ask.)
-
@Junkhacker They were corrupt. I could see the partition types as uefi boot and such, but could not mount them.
I think I have something else going on here. I just tried to recapture a uefi golden image and I have an error about getPartitionLabel: command not found on line 486, then partclone exited with error code 139. So I’m going to have to do a bit more research, maybe uefi mode is causing something if bios mode worked for you.
I may need to start with something simple like win10 bios mode and see where it starts to fall down.
-
@george1421 is it possible that you tried to deploy a legacy image with partclone 0.3 and the --ignore_crc flag set, then captured it again, or something like that? or maybe captured without checksums with 0.3 and redeployed it with --ignore_crc? i know for certain that the second of those two will result in a corrupt partition without reporting any problem. (it tries to skip the blocks in the image file where the checksum would be, but it ends up skipping actual data because there’s no checksum there)
-
downloaded the init_p3 file and tried again. got the UUID error again, but it boots fine.
-
@Junkhacker What I’m currently testing is a legacy image captured with the original inits and then deployed with the test inits from this thread. In that case the test inits (partclone 0.3.12) should have all of the --ignore_crc switches removed. I’m doing this to confirm we won’t have issues with previously captured images and the new inits.
Well, after deploying a simple win7 bios mode image, it also failed on this same test computer. So at this time I need to start back from a known good state and change only one thing at a time.
-
@george1421 that’s the same as i’m doing. i just thought maybe in your testing an upload might have taken place with 0.3.12 by accident. i’ve been deploying images using my dev server, some of which i copied over from my production server. i haven’t seen any problems other than the UUID thing i posted.
-
@Junkhacker Well this is all good information. In your environment (except for the UUID issue) it’s deploying correctly. The other thing (thinking about all of the variables here) is that I’m running 1.5.4 on my production server. I’m pretty sure the 1.5.4 inits were created with an earlier release of buildroot than what I’m building against. It’s possible that buildroot updated applications too, but if everything appears to work in your environment then I can take one step toward ruling out buildroot differences causing this issue. It could also be this old laptop that has issues. I’ll keep working on eliminating non-issues.
Thank you for your feedback.
-
@george1421 Great stuff, you are working on this full on!! I try to follow up on what you post and test, though I don’t have the time to engage in this at the moment.
It’s interesting you and Junkhacker seem to get different results and it would be very important to figure out why. I can’t imagine it being a UEFI vs legacy difference issue as partclone should just be deploying the actual contents of the partitions no matter what the boot type or partition layout looks like. But what do I know…
@george1421 said:
I just tried to recapture a uefi golden image and I have an error about getPartitionLabel: command not found on line 486, then partclone exited with error code 139.
Not sure which version you used because we removed the getPartitionLabel stuff recently. Please make sure you use the very latest version (master branch).
@Junkhacker The UUID error you see stems from an issue we had in the inits when I created the one you use for testing. This was fixed some weeks ago.
i tried the init you supplied, and i got a "failed to set disk guid (sgdisk -U) (restoreUUIDInformation)"
Can you please post a picture of that?