The future of partclone and therefore FOG as it is
-
@Junkhacker said in The future of partclone and therefore FOG as it is:
all you have to do is remove the --ignore_crc flag
OK, then I have to ask the silliest question possible: why was it there in the first place? One might think you would want CRC checks in place.
-
@Junkhacker said in The future of partclone and therefore FOG as it is:
there are a couple of things I think we should set as the new defaults for the new version of partclone when capturing
Well I’ll then defer to your expertise here. What should we change?
If Clonezilla is using 0.3.10+, is there any information we can glean from their project?
-
@george1421 It's kinda always been there, but I would assume it's because we could squeeze a little more performance out of FOG by not calculating checksums. Also, since the image wouldn't decompress properly anyway if there was corruption, the checksum would only catch anything that went wrong between partclone capturing and it being piped into the compressor, which isn't likely to have any problems; and if there is, there's a lot more going on than a checksum is going to help with.
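For what it's worth, that compressor-level protection can be exercised directly; e.g. (generic commands, not FOG code, and the paths are placeholders):
pigz -t /images/test/d1p1.img.gz    # gzip streams carry their own CRC32
zstd -t /images/test/d1p1.img.zst   # the zstd CLI adds a content checksum by default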
-
@george1421 Well, like I alluded to in my last comment, adding checksums doesn't make sense since we pipe the data right into a compressor that adds its own checksums.
… Actually, you can defer to my post here https://forums.fogproject.org/topic/12750/file-format-and-compression-option-request/ for my arguments for change, but the short of it is: disable checksums with the flag -a0.
The rest of my changes actually have to do with the settings we use for compression, come to think of it.
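A minimal sketch of that change against the capture command FOG uses (quoted further down in this thread; note the flag spelling is corrected to -aX0 later in the discussion):
# fog.upload capture line with partclone checksums disabled
# (flag per this thread, spelled -aX0 per the correction below):
partclone.imager -c -aX0 -s "$hd" -O /tmp/pigz1 -N -f 1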
-
Since you're updating partclone, you guys plan on updating zstd too, right?
-
@Junkhacker I have to look into what Buildroot is pulling for zstd. But in general, whatever Buildroot is configured to pull (which is usually the latest) is what gets bundled into FOS.
-
@Junkhacker So far we had version 1.3.3 and are going to go to 1.3.5. As George said, we're just using the versions bundled with Buildroot.
-
@george1421 Awesome. If the changes I want make it into the final builds, it will allow my university to save terabytes with deduplication.
-
@Junkhacker I just confirmed that 1.3.5 is what is being built by buildroot.
-
@george1421 A new feature I want to utilize was added in 1.3.8. 1.4.0 is out now.
-
@Junkhacker said:
if the changes i want to be made make it in to the final builds,
Can you please post more details on which changes you'd like to have? I read about the option -a0, and in the other topic it's -aX0 and --rsyncable. Where do I find information about these options?
-
@Junkhacker said in The future of partclone and therefore FOG as it is:
a new feature i want to utilize was added in 1.3.8.
We can patch Buildroot to use the newer version. That shouldn’t be much trouble I suppose.
-
@Junkhacker As long as it's stable <grin> we can manually update the version in Buildroot to pull a later or earlier release. We'll just need to watch whether any build requirements have changed during compiling. (Hint: it's not a big deal to change.) <as spoken as a non-developer, but someone who has worked with Buildroot quite a bit lately>
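For the curious, bumping a package version in a Buildroot tree looks roughly like this (a sketch assuming Buildroot's standard package layout; the hash file must be updated or the download will be rejected):
# Point Buildroot at zstd 1.3.8 (run from the Buildroot top directory):
sed -i 's/^ZSTD_VERSION = .*/ZSTD_VERSION = 1.3.8/' package/zstd/zstd.mk
# Update package/zstd/zstd.hash with the new tarball's sha256, then rebuild:
make zstd-dirclean zstd-rebuild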
-
@Sebastian-Roth You're right, it's -aX0; that was a typo. The --rsyncable option is for zstd, which should also be given the option -B128 to specify the window block size that's just about ideal for deduplication. Another zstd option, which is functionally incompatible with --rsyncable, is --long, which lets you dedicate more memory to zstd's window of compared data; this can substantially improve the compression. The --long option, however, will also require the --long option to be specified during decompression.
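To make that concrete, here's an illustrative pair of pipelines modeled on the funcs.sh lines quoted below ($imgpath and $target stand in for FOG's actual variables; flag values are as given in this thread):
# Dedup-friendly capture: checksums off, rsyncable chunking, -B128 block size:
partclone.imager -c -aX0 -s "$hd" -O /tmp/pigz1 -N -f 1 &
zstdmt --rsyncable -B128 < /tmp/pigz1 > "$imgpath"
# Higher-ratio alternative: --long is mutually exclusive with --rsyncable and
# must also be passed when decompressing:
zstdmt --long < /tmp/pigz1 > "$imgpath"
zstdmt -dc --long < "$imgpath" | partclone.restore -O "$target" -N -f 1
-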
Looking at the FOG bash scripts, I'm seeing partclone called here:
fog.upload: partclone.imager -c -s "$hd" -O /tmp/pigz1 -N -f 1
and then in funcs.sh
funcs.sh: zstdmt -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -Nf 1
funcs.sh: # Uncompressed partclone
funcs.sh: cat </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -Nf 1
funcs.sh: #[[ ! $? -eq 0 ]] && zstdmt -dc </tmp/pigz1 | partclone.restore --ignore_crc -O ${target} -N -f 1 || true
funcs.sh: # GZIP Compressed partclone
funcs.sh: #zstdmt -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -N -f 1
funcs.sh: pigz -dc </tmp/pigz1 | partclone.restore -n "Storage Location $storage, Image name $img" --ignore_crc -O ${target} -N -f 1
funcs.sh: #[[ ! $? -eq 0 ]] && cat </tmp/pigz1 | partclone.restore --ignore_crc -O ${target} -N -f 1 || true
funcs.sh: [[ ! $exitcode -eq 0 ]] && handleWarning "Image failed to restore and exited with exit code $exitcode (${FUNCNAME[0]})\n Info: $(cat /tmp/partclone.log)\n Args Passed: $*"
funcs.sh: echo " * Using partclone.$fstype"
funcs.sh: partclone.$fstype -n "Storage Location $storage, Image name $img" -cs $part -O $fifoname -Nf 1
I've updated the zstd package in Buildroot to 1.3.8 for the next build run to see if it compiles OK.
-
OK, I ran into a little roadblock here, and then made an observation.
- FOS uses pigz to compress/decompress images, and pigz was last updated in Dec 2017. I have not confirmed this yet, but I'm going to suspect that it doesn't support the --rsyncable switch. So now I have to raise the question, since the pigz project hasn't had any updates since then: do we continue with pigz or switch back to gzip? Have the advantages of pigz in 2017 been superseded by gzip 1.9 (buildroot) or 1.10 (current)?
[Edit] Well, I should have looked at the man page for pigz before posting. There is a -R / --rsyncable command line option. I'll have an updated init.xz created in a few minutes with the options selected. (See the sketch after this list.)
- I noticed that FOS can split the image file into 200MB blocks. I question the logic of needing this now that image files are multiple GiB in size. I might understand 2GB blocks because of 32-bit limitations, but 200MB? This is not something we need to address now, but I just wonder if it's a legacy setting that no one ever uses.
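For reference, a minimal sketch of the rsyncable capture, modeled on the fog.upload line quoted earlier ($imgpath here is a placeholder, not FOG's actual variable):
# partclone.imager writes to the FIFO as in fog.upload; pigz reads it with
# rsyncable chunking enabled (-R):
partclone.imager -c -s "$hd" -O /tmp/pigz1 -N -f 1 &
pigz -R < /tmp/pigz1 > "$imgpath"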
-
@george1421 The 200 MB split option isn't really legacy; in fact, quite the contrary.
The idea of the smaller MB options was so you could fit the files on a series of CDs if needed. This isn't necessary for most people, but I could see it being useful in some cases.
-
@george1421 pigz still outperforms gzip. gzip still doesn't support multi-threading, and I don't think they have any plans to implement it. Also, pigz uses a slightly different formula for its rsyncable implementation, which I have tested to be slightly better at creating chunk data that dedups.
Blame me for the 200MB split code. It was an experiment that got pushed into the mainline code, but the idea was to make it easier for people running FOG to make backups to CD/DVD/external hard drives.
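For illustration, that kind of split can be done with plain coreutils in the capture pipeline (a generic sketch, not FOG's actual code; chunk naming is illustrative):
# Compress the partclone stream and cut it into 200 MB chunks for removable media:
partclone.imager -c -s "$hd" -O /tmp/pigz1 -N -f 1 &
pigz < /tmp/pigz1 | split -b 200m - "$imgpath."   # yields $imgpath.aa, $imgpath.ab, ...
# Restore side: concatenate the chunks back into a single stream:
cat "$imgpath".* | pigz -dc | partclone.restore --ignore_crc -O "$target" -N -f 1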
-
@george1421 said in The future of partclone and therefore FOG as it is:
--ignore_crc
Still no joy restoring a previously captured image using partclone 0.3.12. Just to be sure, I changed the target system's inits back to the FOG default init, and the image deployed correctly on the target computer.
Next step is to rebuild a reference image and capture it with partclone 0.3.12 to see if it's a legacy vs. current partclone issue.
For reference here is a link to my test init: https://drive.google.com/open?id=1L3CxtRXn4cwLksu-41OcGyZ_yd5qlK1h
-
@george1421 Line 2052 of funcs.sh needs the -aX0 flag set, but I don't see anything else wrong yet. I'll test it as soon as I can.
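Assuming that's the capture call quoted earlier in this thread, the change would presumably look like this (a sketch against the quoted line, not the actual diff):
# Before (capture line as quoted from funcs.sh above):
partclone.$fstype -n "Storage Location $storage, Image name $img" -cs $part -O $fifoname -Nf 1
# After: checksums disabled on capture with the flag agreed on in this thread:
partclone.$fstype -n "Storage Location $storage, Image name $img" -cs $part -aX0 -O $fifoname -Nf 1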