ZSTD Compression

  • Developer

    @VincentJ default compression level is 6. testing found that it had the best compression/performance ratio for us. i don’t think many people change it.

  • Moderator

    I am trying to setup something to test speed with local disks and a ramdisk…

    I have a copy of pigz for windows and the zstd version of 7zip…

    I’m setting up a quad core VM on one of my hosts with over 32GB RAM. CPU in the host is dual E5-2603V3 and storage is HDD Raid 1.

    Which compression levels in pigz do you want me to try and test?

    I am not entirely sure this gets round all of the problems speed wise which i could have that would make results not ideal… but it’s the best i can do.

  • Developer

@VincentJ if you’re comparing for performance, you need to compare zstd with pigz. pigz is what we use, and it is multithreaded. it is significantly faster than regular gzip while maintaining compatibility with its compression algorithm.
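The comparison being suggested can be sketched as a quick shell session. This is a hedged sketch, not FOG’s actual pipeline: it assumes both `pigz` and `zstd` are installed, and `sample.bin` is a placeholder payload.

```shell
# Quick side-by-side of pigz and zstd, both multithreaded.
head -c 4M /dev/zero > sample.bin      # placeholder payload, not a real image
pigz -9 -k sample.bin                  # parallel gzip; writes sample.bin.gz
zstd -q -6 -T0 -k sample.bin           # level 6, all cores; writes sample.bin.zst
gzip -t sample.bin.gz && echo "pigz output readable by stock gzip"
```

The `gzip -t` check demonstrates the compatibility point above: pigz output is a plain gzip stream that stock gzip can read.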

  • Moderator

They did a comparison between the gzip cli and the zstd cli: compression with zstd was around 5 times faster and decompression was around 3.5 times faster…

zstd is tunable, which is why i included four levels; all of them beat gzip -9 for compression ratio. If using zstd, we can deliver the image faster (because it’s smaller) and decompress the data faster once it arrives at the client… where is the loss?
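The “where is the loss?” argument can be put as back-of-envelope arithmetic. The numbers below are hypothetical round figures, not measurements from this thread: a 1 Gbit/s link (~117 MB/s), a 7500 MB gzip stream decompressing at ~300 MB/s, and a 6200 MB zstd stream decompressing at ~900 MB/s, taken serially (transfer, then decompress).

```shell
# Serial worst case: total time = transfer time + decompress time (seconds).
# All figures are illustrative assumptions, not benchmarks.
link=117; gzip_mb=7500; gzip_dec=300; zstd_mb=6200; zstd_dec=900
gzip_total=$(( gzip_mb / link + gzip_mb / gzip_dec ))
zstd_total=$(( zstd_mb / link + zstd_mb / zstd_dec ))
echo "gzip: ~${gzip_total}s   zstd: ~${zstd_total}s"
```

In practice imaging pipelines overlap transfer and decompression, so the gap narrows, but a stream that is both smaller and faster to decompress wins on both terms of the sum.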

    I don’t have a testing infrastructure i can use at the moment that would yield verifiable speed results. My storage array is not built for IOPS and there would be live VMs also consuming bandwidth on the array and the network which would affect the result. I would also be reading an image off a drive while trying to write the same image to the same drive within a VM.

  • Moderator

@VincentJ We have to remember here that the target client does all of the heavy lifting during image capture/deploy. (I’m only speaking in general terms here.) You can have the greatest compression/decompression ratio, but if the penalty is time, then whatever you save in over-the-wire transfer is lost because decompression on the target hardware is too slow.

If the OP is interested in testing this, it’s possible to unpack the inits and swap out gzip for zstd to see if imaging rates are better or worse.

    This post should help you narrow in on the spot where fog uses gzip in the imaging process: https://forums.fogproject.org/topic/9525/dedupe-storage-how-to-best-bypass-pigz-packaging/4

  • Moderator

    Disk image of a windows 10 system with some applications - 7,512,046KB (PIGZ 9)
    Unpacked - 16,390,624KB (I can’t unpack further with the tools on the system)
    zstd lvl5 - 7,286,951KB
    zstd lvl11 - 6,967,155KB
    zstd lvl17 - 6,781,375KB
    zstd lvl22 - 6,214,702KB

    I tested in a VM so compression/decompression times/speeds are not useful measurements in my case.
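Turning the byte counts above into ratios makes the comparison easier to read. This is plain shell arithmetic over the numbers already posted; nothing new is measured:

```shell
# Integer percentage of the 16,390,624 KB unpacked image, per the figures above.
raw=16390624
for pair in "pigz9:7512046" "zstd5:7286951" "zstd11:6967155" "zstd17:6781375" "zstd22:6214702"; do
  name=${pair%%:*}; size=${pair##*:}
  echo "$name: $(( size * 100 / raw ))% of raw"
done
```

So pigz 9 lands at roughly 45–46% of the raw size, while zstd level 22 reaches about 38%, a saving of roughly 1.3 GB on this particular image.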

  • Developer

from what i’m reading, it looks like this is optimized for small files, and fog images are huge files…
have you tried decompressing a fog image and compressing it with this to see how well it compresses/decompresses, and how quickly?
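The round-trip test being asked for can be sketched like this. It is a sketch under assumptions: `zstd` is installed, and the generated payload stands in for a real FOG image (on a real host you would substitute an actual image file and a pigz-compressed artifact):

```shell
# Stand-in for "decompress a fog image and recompress it with zstd".
head -c 4M /dev/zero > image.img           # placeholder; use a real image here
gzip -9 -k image.img                       # stands in for the existing pigz artifact
zstd -q -19 -k image.img -o image.img.zst  # high zstd level, like the tests above
zstd -q -d image.img.zst -o check.img      # decompress to verify the round trip
cmp image.img check.img && echo "round trip OK"
ls -l image.img.gz image.img.zst           # compare the two compressed sizes
```

On real image data, wrapping the compress and decompress commands in `time` would also capture the speed numbers the developer is asking about.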
