ZSTD Compression
-
I’ve been talking with @Tom-Elliott about this, and we don’t think it would be worth the effort it would take to implement Zstandard. The thing is, faster decompression is kind of irrelevant for FOG at the moment; what slows down deployments right now is transfer speed. The only way FOG would get faster is if the file size were very significantly decreased. While the compression ratio is better with Zstandard, the difference isn’t very significant until you get to the higher compression levels, where processing time becomes a big issue.
There are other issues that deter us from adoption, but that’s the most significant reason. In fact, the single greatest reason TO adopt it would be that I think it’s really cool, lol.
-
@Junkhacker said in ZSTD Compression:
There are other issues that deter us from adoption, but that’s the most significant reason. In fact, the single greatest reason TO adopt it would be that I think it’s really cool, lol.
It would be REALLY COOL
-
The problem isn’t whether or not it gets implemented.
Already, with PIGZ in use, the issue (beyond multiple unicast tasks) is most often a slowdown in writing the information to the disk. This is especially apparent when one is dealing with SSDs.
It’s great that you can have “fast” decompression, but that only goes so far. You still have to write the data to disk. You have some buffer, but we’re already “decompressing” the data as fast as we can.
Where this might be very useful, however, would be uncompressed images that are compressed on the fly as the data is requested, then decompressed and placed on disk by the client, giving us a live means of reducing the amount of data passed across the network. Once it’s passed to the client, the only “hold” is the speed at which data can be pushed from RAM and written to disk. Even this, however, can only do so much.
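A minimal sketch of that idea, assuming a raw (uncompressed) image file and plain netcat for transport - both hypothetical stand-ins, not how FOG actually moves data - just to show the shape of it:

```
# Server side: compress only as the data is requested, and stream it out.
# (Hypothetical: /images/host1/d1p1.img is a raw, uncompressed image.)
zstd -1 -c /images/host1/d1p1.img | nc -l 9000

# Client side: decompress the stream and write straight to disk.
nc fogserver 9000 | zstd -d -c > /dev/sda1
```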
Is it really worth implementing a new compression mechanism to get a speed increase of maybe 1% during our imaging process?
-
I understand the speed would be significantly increased on upload tasks, but I don’t know how often people are uploading.
-
1 vCPU @ 1.6GHz - the system can no longer saturate gigabit over network shares… down from 110MB/s to 82MB/s.
Compression - Compressed size - Decompression time
zstd lvl1 - 7,940,779KB - 131 seconds
zstd lvl3 - 7,420,268KB - 134 seconds
zstd lvl11 - 6,967,155KB - 139 seconds
zstd lvl22 - 6,214,702KB - 157 seconds
pigz.exe --keep -6 a:\d1p2 - 7,535,149KB - 247 seconds
On my quad-core VM, PIGZ -6 only managed 50MB/s decompression; zstd level 11 on a single-core VM manages the same 50MB/s…
On the single-core VM, PIGZ -6 is only 30MB/s, while the slowest zstd gets, at level 22, is 39.5MB/s.
If we use the single-core numbers: writing the whole image in 247 seconds (which isn’t too much faster than expected anyway) works out to around 66MB/s on disk, while zstd 11 writing it in 139 seconds works out to 117MB/s. Most SATA disks should be able to do this… it will be a push for some 2.5" disks… (I checked numbers for 2.5" and 3.5" WD Greens.)
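In case anyone wants to reproduce numbers like these, here is a rough sketch of the loop; the file name (a raw partition dump called d1p2) and the level list are illustrative:

```
# Rough benchmark sketch: d1p2 is a raw partition dump (illustrative name).
# --ultra unlocks compression levels above 19 (needed for level 22).
for lvl in 1 3 11 22; do
    time zstd --ultra -$lvl -k -c d1p2 > d1p2.$lvl.zst   # compression time
    ls -l d1p2.$lvl.zst                                  # compressed size
    time zstd -d -c d1p2.$lvl.zst > /dev/null            # decompression time
done

# The pigz comparison point at level 6:
time pigz --keep -6 -c d1p2 > d1p2.gz
time pigz -d -c d1p2.gz > /dev/null
```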
-
Note that, since v1.1.3, there is a multithread mode available with `zstd`. It needs to be compiled with specific flags though.
On Linux, it means typing `make zstdmt`.
For Windows, there are pre-compiled binaries in the release section: use the `zstdmt` one.
Since `pigz` is multi-threaded, it would be more fair to compare to `zstdmt`, rather than single-threaded `zstd`.
The number of threads can be selected with command `-T#`, like `xz`.
-
The version of zstd I’ve been using uses all my threads.
-
Maybe you just saw the note about 1 vCPU. I only reduced to 1 vCPU because the numbers with 4 vCPUs were all so close together.
It also might help to simulate a ‘low end’ machine…
-
For those of us not smart enough to fully understand, can someone give me a simple comparison – in time – between the proposed compression and the current compression?
I work two IT jobs, one full-time and one part-time. Between the two, we order tens of thousands of computers each year. For January 2017, this was the most ordered machine from Dell, and most other orders were also in this same power range:
OptiPlex 5040 Small Form Factor
i5-6500 Processor (Quad Core, 6MB, 3.2GHz)
8 GB RAM
256 GB SSD
1 Gbps NIC
Percentage-wise, approximately how much faster/slower would the proposed compression be for this machine when deploying an image to it?
-
@loosus456 It’s hard to say. Compression on upload would be phenomenal, but for deployment I don’t think there’d be a huge difference, as even with our current stuff we’re mostly limited by the speed of writing to disk.
-
@Tom-Elliott We do upload often (about twice a month), but if the upload isn’t much, much faster and the deployment isn’t significantly faster, it probably isn’t worth it.
I do wonder if Hyper-V upload through the legacy adapter would be faster, though. That takes literal hours right now.
-
@loosus456 Let’s say you upload 2 images a month and you deploy 400 times a month. Ultimately, while upload would be “faster,” you’re only speeding things up during the upload process, as you still have your “setup” work to create the image, which is what takes most of your time.
-
@Tom-Elliott Well, when it comes to uploading from Hyper-V with a legacy adapter to FOG, upload time is actually what takes most of the time. Image creation takes little time in comparison.
But yes, uploading from a physical machine is quite fast.
-
So here’s what I want to say.
Seeing as this ZSTD, from what I can see here, only impacts upload speeds, is it worth the effort of supporting a new standard and methodology of software when pigz/gzip is pretty much well standardized?
Consider this:
While capturing could be significantly improved, the deploy (which I imagine happens far more often than capture tasks) would not see a significant boost. Now, if you have 10 unicast tasks with ZSTD that are able to deploy much more reliably and faster, that would be an improvement worth considering.
So if you all want to try this, build your inits using the Wiki instructions and the information from the buildroot source already provided in every installation of FOG, and run tests. Right now, as I’m seeing it, implementing this has been focused solely on compressing the image after it has already been captured. Has anybody actually “compressed” the image during a real “capture” task? (There’s a rough sketch of the pipeline swap below the list.)
Things to work with:
- Integration into the inits as a real utility for us to use.
- Do the same results happen on capture (maybe I missed this part)?
- Do multiple unicast deploys run faster using this mechanism?
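For anyone experimenting before the inits support it natively, the swap is conceptually along these lines. This is a sketch only - FOG’s imaging scripts assemble the real pipelines dynamically, and the device and paths here are illustrative:

```
# Capture: stream the partition through zstd instead of pigz.
# -T0 asks for all cores (only honored by a multithread-capable build).
partclone.ntfs -c -s /dev/sda2 -o - | zstd -T0 -11 > /images/host1/d1p2.img

# Deploy: decompress and feed the stream back to partclone.
zstd -d -c /images/host1/d1p2.img | partclone.ntfs -r -s - -o /dev/sda2
```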
-
So, for what it’s worth, I’m giving it a shot. I haven’t coded anything to use zstd yet, but I am running an installation/build test that will hopefully build the inits with the necessary zstd binaries so others can test internally.
-
@Tom-Elliott said in ZSTD Compression:
Already, with PIGZ in use the issue (beyond multiple Unicast tasks) is most often slow down in writing the information to the disk.
As I was reading through this thread, this is exactly what I thought - that the biggest benefit would come with multiple simultaneous unicast deployments. Maybe instead of having Max Clients set at `2` I could do `3`. And who knows, maybe I’ll squish the images enough to store 1 extra.
-
@Tom-Elliott Thanks for putting it into the init.
Would it be as simple as searching through the code for the imaging commands and changing them to use zstd instead of pigz, or would there be more complicated things involved due to the way the commands are generated?
Do you know if most people use multicast or just do multiple unicasts for deployments? I have never gotten multicast to work fully and always end up with each client downloading on its own. I have usually had my server set to 4 clients at once, except when I had 10GbE and 2Gbit links between MDF and IDF… On that machine I used 8, and with ZFS caching I had no problems with the disk IO of so many transfers.
If we can get improvements by increasing those numbers, then it makes it a bit more worth the effort to speed up people’s deployments.
As for uploading… I also have to upload every month or so, and with one of my clients I have a 2-hour window to do all maintenance, so uploading sometimes gets delayed, as it can take a considerable amount of time.
The other benefit of reduced file size would also help, in my case, by reducing the sync time between sites over WAN.
As people’s machines become more powerful, we can scale with them instead of being held back by the lack of speed in PIGZ. 10GbE is coming down in price, and SSDs/NVMe/HDDs are getting better all the time.
-
@VincentJ said in ZSTD Compression:
Do you know if most people use multicast or just do multiple unicast for deployments?
It’s a mix.
-
Alright, so I got bored.
I added all the “capabilities” as requested.
What I have found, so far, is that zstd doesn’t appear any better in compression, even at -19, but comes with a SIGNIFICANT amount of overhead to wait through. My image with pigz -6 was 2.8GB with a 10 minute capture time. The same image deployed and then captured under zstd -19 was 2.6GB with a 35 minute capture time. This was all done in NATIVE capture/deploy to ensure the inits were capable.
Mind you, my test system had 1 CPU, so multiple CPUs may have helped the capture time, but was it worth it? I mean, to be a suitable alternative usable across many systems, it would need to compress to at least half the size. Saving 200MB is not worth it.
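If the init’s zstd binary ends up built with threading support, a re-run along these lines would at least take the single-thread penalty out of the comparison (assumptions: `-T0`, meaning use all available threads, is only honored by a multithread-capable build, and d1p2 is an illustrative file name):

```
# Hypothetical re-test of the same capture compression with all cores in play:
zstd -T0 -19 -c d1p2 > d1p2.zst
```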
-
What is in that image? 2.6GB compressed is very small. Does that image download in under a minute normally?
I have a base Windows 10 + updates image I can also try. The one I used in my numbers previously had applications in it for a complete system. I will see if I can get that to compress down to something similar.
While my image is a lot bigger, if I scale yours up to the size of mine, I am saving a lot more space.