ZSTD Compression
-
@Tom-Elliott said in ZSTD Compression:
Alright, so I got bored.
I added all the “capabilities” as requested.
What I have found, so far, is that zstd doesn't appear any better in compression even at -19, but comes with a SIGNIFICANT amount of overhead to wait through. My image with pigz -6 was 2.8 GB with a 10 minute capture time. The same image, deployed and then captured under zstd -19, was 2.6 GB with a 35 minute capture time. This was all done in NATIVE capture/deploy to ensure the inits were capable.
Mind you, my test system had 1 CPU, so multiple CPUs may have helped the capture time, but was it worth it? I mean, to be usable across multiple systems, a result of at least half the size would make it a suitable alternative. Saving 200 MB is not worth it.
This was kind of my fear with ZSTD: it seems very slow on older CPUs (and even more so on single-core ones).
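(For anyone who wants to reproduce this kind of comparison by hand outside of FOG: the capture is essentially a partclone stream piped through a compressor. The lines below are only a rough sketch of that idea, not the literal command the FOG inits run, and the partition and output paths are placeholders.)
partclone.ntfs -c -s /dev/sda2 -o - | pigz -6 > /images/test/d1p2.img.gz       # gzip path at level 6
partclone.ntfs -c -s /dev/sda2 -o - | zstd -19 -c > /images/test/d1p2.img.zst  # single-threaded zstd at level 19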
-
@Tom-Elliott this is really unexpected. zstd tends to save a ton of time compared to gzip, on top of a better compression ratio, especially on large files (and anything > 100 MB certainly qualifies).
Except for cases where compression time doesn't matter, zstd's good use range seems to be levels 1 - 8. The default is 3, though I tend to prefer level 5 for my own use. It always compresses better than gzip's default of 6.
Your test image may contain large incompressible sections (such as already-compressed files in the image, which no algorithm can compress further). This is typical of a freshly installed Windows OS. But is it representative of FOG images?
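A quick way to sanity-check that on your own data is to run both compressors over the same file and compare the output sizes. This is just a sketch; somefile.img is a placeholder, and -k (keep the original) needs a reasonably recent gzip:
gzip -6 -k -v somefile.img
zstd -5 -k -v somefile.img
ls -lh somefile.img.gz somefile.img.zst
Both -v flags print the achieved ratio, and the ls at the end shows the two results side by side.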
-
Remember, the slow speed was due to finding out afterwards that I was using a non-multithreaded version of zstd. The speeds are much faster when paired with multiple CPUs. The size, however, isn't much different with or without multithreading.
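For reference, in a multithreaded zstd build the worker count is just a flag, and -T0 means use every detected core. The file name below is only a placeholder:
zstd -19 -T0 somefile.img     # all detected cores
zstd -19 -T4 somefile.img     # or cap it at 4 worker threads
Threading only changes the speed; as noted above, the resulting size stays roughly the same.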
-
@loosus456 said in ZSTD Compression:
We actually have about 1.5 TB worth of FOG images.
Then you are not most people.
-
@Tom-Elliott said in ZSTD Compression:
the pzstd library was of more use as it allowed realtime and multi-core compression. I am having a problem building pzstd for 32 bit systems though :(
32-bit systems can use the zstd one. Can't wait to see the compression/decompression speeds when you get pzstd working.
I can set up some tests for multiple simultaneous unicast tasks, and I'm sure I'll be able to note greater performance with the better compression.
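For anyone who wants to try it themselves, pzstd lives in the contrib/ directory of the zstd source tree and takes an explicit thread count. These lines are only illustrative, with a placeholder file name:
pzstd -6 -p 8 somefile.img        # compress at level 6 with 8 threads
pzstd -d -p 8 somefile.img.zst    # decompress the result, also in parallel
Archives created by pzstd are split into independent frames, which is what lets the decompression side use multiple cores too.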
-
@Wayne-Workman we have 3.7 TB
-
@Junkhacker You're for sure not most people.
-
I want to say, for people using zstdmt (vs pzstd, as I can't get it to build on 32 bit), the imaging is MUCH faster even at -19.
It's compressing to about the same size, more or less, but it's much faster than the 35 minutes I originally tested against. Using zstdmt on the same image with all cores in use took that 35 minute capture and got it all done in 2 minutes 58 seconds.
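For context, zstdmt ships as part of the zstd package and is simply the multithreaded entry point; invoking zstdmt is equivalent to invoking zstd -T0 (use all cores). So the capture pipe being described is roughly of this shape, with the partition and output path as placeholders rather than the literal init command:
partclone.ntfs -c -s /dev/sda2 -o - | zstdmt -19 > /images/test/d1p2.img.zst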
-
@Tom-Elliott I was just performing some testing at zstd -11 (via zstdmt) to compare to pigz -6:
image size went from 16.4 GB with pigz to 14.6 GB with zstdmt
capture time went from 14 minutes 15 seconds with pigz to 11 minutes 4 seconds with zstdmt
deploy time went from 3 minutes 33 seconds with pigz to 2 minutes 38 seconds with zstdmt
-
Geeze, that is flippin fast… OK, I'm sold.
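If anyone else wants to pick a level for their own data before committing, zstd also has a built-in benchmark mode that walks a range of levels over a sample file; sample.img below is just a placeholder:
zstd -b3 -e11 -i5 sample.img
That benchmarks levels 3 through 11, spends at least 5 seconds per level, and prints compression speed, decompression speed and ratio for each.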
-
I have not been able to get "ultra" level (>=20) tasks to work without crashing. Anyone else seeing the same?
-
Has anyone tried something like a 40 GB image? That's in the range where I'd like to see a comparison of deployment time.
-
@loosus456 I would say that a 40 GB disk image is uncommon. If you have a source to test with, I might suggest that you update to the latest RC release and test the results.
-
@george1421 40 GB is uncommon? Uh what? Are you guys deploying XP or something?
-
@Junkhacker I'm seeing exactly the same problem, and it feels like it's writing the data and filling the RAM space too quickly. The data comes in far faster than it can be compressed, and it overruns the memory space that partclone is using. (Just my theory.)
-
@loosus456 My base images with all software had a disk usage of 48 GB at its largest, but that was EVERYTHING I could put on. My "average" was about 25 GB.
-
@Junkhacker "ultra" compression levels (>=20) use a lot of memory.
So either stick to "normal" compression levels (<=19), or limit the number of threads, to ensure it doesn't consume too much memory.
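For the record, the CLI makes you opt in to those levels explicitly, and the thread flag is the easy way to bound the memory use. A rough sketch with a placeholder file name (whether the FOG inits expose these switches is a separate question):
zstd --ultra -22 -T2 somefile.img    # levels above 19 require --ultra; -T2 caps it at 2 worker threads
zstd -19 -T0 somefile.img            # the safer alternative: stay at 19 and use all cores
Decompressing archives built with very large windows can also need extra memory on the client side (zstd -d has a --memory= override), which may matter on low-RAM imaging clients.
-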
@Tom-Elliott We have images with Autodesk and Solidworks near 100 GB.
-
@loosus456 I'm not disagreeing; I'm just saying what I imagine "most" people are working with (on average).
-
@Tom-Elliott I dunno. Most educational professionals I work with have had similar sizes.