ZSTD Compression
-
I just want to say for people: using zstdmt (vs pzstd, which I can’t get to build on 32-bit), the imaging is MUCH faster, even at -19.
It compresses to about the same size, more or less, but it’s much faster than the 35 minutes I originally had tested against. Using zstdmt on the same image with all cores in use took that 35-minute job and got it all done in 2 minutes 58 seconds.
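A minimal sketch of the kind of pipeline I mean, outside of FOG (assuming zstd is installed; the device and output paths are placeholders, and zstdmt is effectively zstd with -T set to the core count):

```python
import multiprocessing
import subprocess

# Stream a partition through multithreaded zstd at level 19.
# /dev/sda2 and the output path are illustrative placeholders only.
threads = multiprocessing.cpu_count()
with open("/dev/sda2", "rb") as src, open("/images/sda2.img.zst", "wb") as dst:
    subprocess.run(
        ["zstd", "-19", f"-T{threads}", "-c"],  # -c writes the compressed stream to stdout
        stdin=src,
        stdout=dst,
        check=True,
    )
```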
-
@Tom-Elliott I was just doing some testing at zstandard -11 to compare to pigz -6.
Image size went from 16.4 GB (gz) to 14.6 GB (zstdmt).
Capture time went from 14 minutes 15 seconds (pigz) to 11 minutes 4 seconds (zstdmt).
Deploy time went from 3 minutes 33 seconds (pigz) to 2 minutes 38 seconds (zstdmt).
-
Geez, that is flippin’ fast… OK, I’m sold.
-
I have not been able to get “ultra” level (>=20) tasks to work without crashing. Anyone else seeing the same?
-
Anyone tried like a 40 GB image? That’s in the range where I’d like to see a comparison in deployment time.
-
@loosus456 I would say that a 40 GB disk image is uncommon. If you have such a source, then I might suggest that you update to the latest RC release and test the results.
-
@george1421 40 GB is uncommon? Uh what? Are you guys deploying XP or something?
-
@Junkhacker I’m seeing exactly the same problem, and it feels like it’s writing the data and filling the RAM too quickly. The data comes in far faster than it can be compressed and overruns the memory space that partclone is using. (Just my theory.)
-
@loosus456 My base images with all software had a disk usage of 48 GB at their largest, but that was EVERYTHING I could put on. My “average” was about 25 GB.
-
@Junkhacker The “ultra” compression levels (>=20) use a lot of memory.
So either stick to “normal” compression levels (<=19), or limit the number of threads to ensure it doesn’t consume too much memory. A rough sketch of what I mean by capping the threads follows.
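This is only an illustration, not FOG’s actual capture script; the level, thread count, and paths are assumptions, and zstd requires --ultra for levels above 19:

```python
import subprocess

def compress_ultra(src_path: str, dst_path: str, level: int = 20, threads: int = 2) -> None:
    """Compress a raw partition dump at an ultra zstd level with a capped thread count."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        subprocess.run(
            ["zstd", "--ultra", f"-{level}", f"-T{threads}", "-c"],
            stdin=src,
            stdout=dst,
            check=True,
        )

# e.g. compress_ultra("/dev/sda2", "/images/sda2.img.zst", level=20, threads=2)
```
-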
@Tom-Elliott We have images with Autodesk and Solidworks near 100 GB.
-
@loosus456 I’m not disagreeing, I’m just saying what I imagine “most” people are working with (on average).
-
@Tom-Elliott I dunno. Most educational professionals I work with have had similar sizes.
-
@loosus456 said in ZSTD Compression:
40 GB is uncommon?
My Win10 images were around 20 GB. Linux images are even smaller, 3 or 4 GB, and that’s all I deploy anymore, just Linux.
Understand that when I build an image, it’s lean. No flab. I literally throw out the manufacturer’s bloated, crappy, unoptimized image and build my own from scratch, not just for size and performance reasons but for security reasons too. Computer manufacturers don’t vet the bloatware they sign contracts to install into their images, and many times these crappy, bloated pieces of software are found to have vulnerabilities. But I make images lean. In Windows that means a vanilla installation and using Device Manager to install ONLY the drivers, with no extra flab/bloatware. When I put MS Office on the image, I only install the pieces that people use, not everything. The fat 10 GB recovery partition gets thrown out; I don’t need that. I also rebuild all my images every summer, because if all you do is take an old image and patch it up, it gets bloated and slow.
-
@Tom-Elliott said in ZSTD Compression:
@loosus456 My base images with all software had a disk usage of 48 GB at their largest, but that was EVERYTHING I could put on. My “average” was about 25 GB.
I use a single Win7 image at my place (nearly all software, etc.); it’s just under 30 GB. I’ll make some time to do some tests.
-
@Junkhacker The majority of my images are ~55 GiB compressed, 85-115 GiB deployed. And yah, Autodesk and Adobe products suck it up real good.
-
@loosus456 The image I was testing with here was 33.87 GB “on client”.
-
So… deploy with old FOG, capture with new FOG, on a different image set to zstd 11.
I noticed that on the upload the screen still says:
Starting to clone device (/dev/sda2) to image (/tmp/pigz1)
The image is Multiple Partition Single Disk (non-resizable). On my old image it has two files… but the new one has three…
I am pulling the image off the NAS now to unpack it and test it, but thought I would pitch in on what I’ve seen so far.
SVN revision 6066
-
@VincentJ “/tmp/pigz1” is just the name of the FIFO that the data is being piped into on its way to compression. Maybe we should rename it for purely aesthetic reasons, but that’s working as it should. Was the previous image also Multiple Partition Single Disk (non-resizable)?
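Purely as an illustration of that arrangement (names, paths, and commands here are placeholders, not FOG’s actual scripts), the FIFO just sits between the capture tool and the compressor:

```python
import os
import subprocess
import tempfile

# The capture tool writes raw data into a named pipe and the compressor reads
# from the other end; the pipe only happens to still be called "pigz1".
work_dir = tempfile.mkdtemp()
fifo_path = os.path.join(work_dir, "pigz1")
os.mkfifo(fifo_path)

# Reader end: zstd drains the FIFO and writes the compressed stream to a file.
compressor = subprocess.Popen(f"zstd -11 -T0 -c < {fifo_path} > /tmp/d1p2.img.zst", shell=True)
# Writer end: in FOG this would be partclone; a plain cat of the device stands in here.
producer = subprocess.Popen(f"cat /dev/sda2 > {fifo_path}", shell=True)

producer.wait()
compressor.wait()
os.remove(fifo_path)
```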
-
@Junkhacker Yes it was. All of my images are.