The future of partclone and therefore FOG as it is

  • Moderator


    OK, first of all: I updated the init on the Google Drive (the end result here will be to provide the developers a diff file between the changes and the original).

    So your deployment worked and mine did not. So we need to find out what’s different.

    In my case it was a UEFI image deployed to a Dell 6430 in UEFI mode. Using the original inits, this system deployed OK. With partclone v0.3 it did not.

    The deployed image was Win10 Ent 1803, partclone, gzip compressed. OS=Win10 (9), single disk resizable, all partitions, compression 6.

    I got tied up today, so I haven’t had a chance to build a new golden image to attempt to capture and deploy with the same (upgraded) partclone.

  • Developer

    If I use the init @Sebastian-Roth supplied here and edit it to remove the --ignore_crc flags, I have no problems when deploying any images. Just FYI.

  • Developer

    I tried the init you supplied, and I got a "failed to set disk guid (sgdisk -U) (restoreUUIDInformation)" error.

    It did boot fine, though.

  • Developer

    @george1421 line 2052 needs the -aX0 flag set, but I don’t see anything else wrong yet. I’ll test it as soon as I can.

  • Moderator

    @george1421 said in The future of partclone and therefore FOG as it is:


    Still no joy restoring a previously captured image using partclone 0.3.12. Just to be sure, I changed the target system’s inits back to the FOG default init, and the image deployed correctly on the target computer.

    Next step is to rebuild a reference image and capture it with partclone 0.3.12 to see if it’s a legacy-vs-current partclone issue.

    For reference here is a link to my test init:

  • Developer

    @george1421 pigz still outperforms gzip. gzip still doesn’t support multi-threading, and I don’t think they have any plans to implement it. Also, pigz uses a slightly different formula for its rsyncable implementation, which I’ve tested and found slightly better at creating chunk data that deduplicates.
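    One practical consequence worth noting: pigz writes standard gzip streams, so the two tools are interchangeable on the wire. A small sketch (file names are made up) that also uses pigz’s -R/--rsyncable flag when pigz is available:

```shell
#!/bin/sh
# Sketch: pigz output is an ordinary gzip stream, so plain gzip can
# decompress it (and vice versa). File names here are hypothetical.
set -e
printf 'image chunk data\n' > /tmp/chunk.bin
if command -v pigz >/dev/null 2>&1; then
    pigz -R -c /tmp/chunk.bin > /tmp/chunk.gz   # -R: rsync/dedup-friendly block flushing
else
    gzip -c /tmp/chunk.bin > /tmp/chunk.gz      # fallback when pigz is absent
fi
gzip -dc /tmp/chunk.gz > /tmp/chunk.out         # plain gzip reads either stream
cmp -s /tmp/chunk.bin /tmp/chunk.out && echo "round-trip ok"
```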

    Blame me for the 200 MB split code. It was an experiment that got pushed into the mainline code, but the idea was to make it easier for people running FOG to make backups to CD/DVD/external hard drives.

  • Senior Developer

    @george1421 The 200 MB split option isn’t really legacy, in fact quite the contrary.

    The idea of the smaller MB options was that you could fit the files on a series of CDs if needed. This isn’t necessary for most people, but I could see it being useful in some cases.
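    The split-and-rejoin behaviour described above is easy to emulate with coreutils; a sketch (sizes and paths invented, with a zero-filled file standing in for a captured image):

```shell
#!/bin/sh
# Sketch: fixed-size chunking of an image file and lossless reassembly.
# A 5 MB zero file stands in for a captured image; paths are hypothetical.
set -e
dd if=/dev/zero of=/tmp/img.bin bs=1M count=5 2>/dev/null
split -b 2M -d /tmp/img.bin /tmp/img.bin.part.   # 2 MB chunks: part.00, part.01, part.02
cat /tmp/img.bin.part.* > /tmp/img.rejoined      # the shell glob sorts the parts back in order
cmp -s /tmp/img.bin /tmp/img.rejoined && echo "rejoin ok"
```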

  • Moderator

    OK I ran into a little roadblock here and then an observation.

    1. FOS uses pigz to compress/decompress images; pigz was last updated in Dec 2017. I have not confirmed this yet, but I’m going to suspect that it doesn’t support the --rsyncable switch. So, since the pigz project hasn’t had any updates since then, I have to raise the question: do we continue with pigz or switch back to gzip? Have the advantages pigz had in 2017 been superseded by gzip 1.9 (buildroot) or 1.10 (current)?

    [Edit] Well, I should have looked at the man page for pigz before posting. There is a -R (--rsyncable) command line option. I’ll have an updated init.xz created in a few minutes with the options selected.

    2. I noticed that FOS can split the image file into 200 MB blocks. I question the logic of needing this now that image files are multiple GiB in size. I might understand 2 GB blocks because of 32-bit file-size limitations, but 200 MB? This is not something we need to address now; I just wonder if it’s a legacy setting that no one ever uses.

  • Moderator

    Looking at the fog bash scripts, I’m seeing partclone called here:

    fog.upload:            partclone.imager -c -s "$hd" -O /tmp/pigz1 -N -f 1

    and then in the restore path:

        zstdmt -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -Nf 1

        # Uncompressed partclone
        cat </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -Nf 1
        #[[ ! $? -eq 0 ]] && zstdmt -dc </tmp/pigz1 | partclone.restore --ignore_crc -O ${target} -N -f 1 || true

        # GZIP Compressed partclone
        #zstdmt -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -N -f 1
        pigz -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -N -f 1
        #[[ ! $? -eq 0 ]] && cat </tmp/pigz1 | partclone.restore --ignore_crc -O ${target} -N -f 1 || true

        [[ ! $exitcode -eq 0 ]] && handleWarning "Image failed to restore and exited with exit code $exitcode (${FUNCNAME[0]})\n   Info: $(cat /tmp/partclone.log)\n   Args Passed: $*"

        echo " * Using partclone.$fstype"
        partclone.$fstype -n "Storage Location $storage, Image name $img" -cs $part -O $fifoname -Nf 1

    I’ve updated the zstd package in buildroot to 1.3.8 for the next build run to see if it compiles OK.
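    The decompress-into-restore plumbing above can be mimicked with a fifo, using cat as a stand-in for partclone.restore (all names here are invented for the sketch):

```shell
#!/bin/sh
# Sketch of the FOS restore plumbing: a decompressor streams through a fifo
# into a consumer. cat stands in for partclone.restore; paths are made up.
set -e
fifo=/tmp/pigz1.demo
rm -f "$fifo"
mkfifo "$fifo"
printf 'disk blocks\n' | gzip -c > /tmp/stream.gz   # a pretend captured image
gzip -dc /tmp/stream.gz > "$fifo" &                 # decompressor feeds the fifo
cat < "$fifo" > /tmp/restored.bin                   # consumer drains it
wait
grep -q 'disk blocks' /tmp/restored.bin && echo "pipeline ok"
```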

  • Developer

    @Sebastian-Roth you’re right, it’s -aX0; that was a typo.

    The --rsyncable option is for zstd, which should also be given the option -B128 to specify the window block size that’s just about ideal for deduplication.
    Another zstd option, which is functionally incompatible with --rsyncable, is --long. It lets you dedicate more memory to zstd’s window of compared data, which can substantially improve compression. However, the --long option must also be specified during decompression.
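    The --long caveat is easy to see on the command line: the same long-window setting goes on both sides of the round trip. A sketch, guarded because zstd may not be installed, with made-up file names:

```shell
#!/bin/sh
# Sketch of the --long caveat described above: the post notes the same
# --long setting must be supplied at decompress time as well.
# Guarded: zstd may be absent. File names are hypothetical.
set -e
if command -v zstd >/dev/null 2>&1; then
    printf 'golden image payload\n' > /tmp/demo.img
    zstd -q -f --long=27 -o /tmp/demo.img.zst /tmp/demo.img
    zstd -q -f -d --long=27 -o /tmp/demo.out /tmp/demo.img.zst   # matching --long on decompress
    cmp -s /tmp/demo.img /tmp/demo.out && echo "long-window round-trip ok"
else
    echo "zstd not installed; skipping demo"
fi
```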

  • Moderator

    @Junkhacker As long as it’s stable <grin> we can manually update the version in buildroot to pull a later or earlier release. We’ll just need to watch whether any build requirements have changed during compiling. (Hint: it’s not a big deal to change.) <as spoken by a non-developer, but someone who has worked with buildroot quite a bit lately>

  • Senior Developer

    @Junkhacker said in The future of partclone and therefore FOG as it is:

    a new feature i want to utilize was added in 1.3.8.

    We can patch Buildroot to use the newer version. That shouldn’t be much trouble I suppose.
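    Bumping the version in a Buildroot tree usually means editing the package .mk file and its matching hash file. A hypothetical sketch (paths and variable names follow upstream Buildroot conventions, not verified against the FOS tree):

```
# package/zstd/zstd.mk (hypothetical excerpt)
ZSTD_VERSION = 1.3.8
ZSTD_SITE = $(call github,facebook,zstd,v$(ZSTD_VERSION))

# package/zstd/zstd.hash must also be updated with the
# checksum of the new release tarball.
```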

  • Senior Developer

    @Junkhacker said:

    if the changes I want make it into the final builds,

    Can you please post more details on which changes you’d like to have? I read about option -a0, and in the other topic it’s -aX0 and --rsyncable. Where do I find information about these options?

  • Developer

    @george1421 A new feature I want to utilize was added in 1.3.8. 1.4.0 is out now.

  • Moderator

    @Junkhacker I just confirmed that 1.3.5 is what is being built by buildroot.

  • Developer

    @george1421 Awesome. If the changes I want make it into the final builds, it will allow my university to save terabytes with deduplication.

  • Senior Developer

    @Junkhacker So far we had version 1.3.3 and going to go to 1.3.5. As George said, just using the versions bundled with Buildroot.

  • Moderator

    @Junkhacker I have to look into what buildroot is pulling for zstd. But in general, whatever buildroot is configured to pull (which is usually the latest) is what gets bundled into FOS.

  • Developer

    Since you’re updating partclone, you plan on updating zstd too, right?

  • Developer

    @george1421 Well, like I alluded to in my last comment, adding checksums doesn’t make sense since we pipe the data right into a compressor that adds its own checksums.
    … actually, you can defer to my post here for my arguments for change, but the short of it is:

    disable checksums with the flag -a0

    The rest of my changes actually have to do with the settings we use for compression, come to think of it.
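    The compressor-side integrity check that makes partclone’s per-block checksums redundant can be seen with plain gzip, whose stream ends in a CRC-32 of the uncompressed data (file names invented for the sketch):

```shell
#!/bin/sh
# Sketch: gzip already stores a whole-stream CRC-32, which is the kind of
# integrity check that makes partclone's own checksums (disabled with -a0)
# redundant when piping through a compressor. Paths are hypothetical.
set -e
printf 'partition payload\n' > /tmp/p.bin
gzip -c /tmp/p.bin > /tmp/p.bin.gz
gzip -t /tmp/p.bin.gz && echo "compressor CRC ok"   # -t: verify the stored CRC
```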