
    ZSTD Compression

Feature Request
VincentJ Moderator:

Disk image of a Windows 10 system with some applications - 7,512,046KB (pigz -9)
Unpacked - 16,390,624KB (I can't unpack it further with the tools on the system)
zstd lvl5 - 7,286,951KB
zstd lvl11 - 6,967,155KB
zstd lvl17 - 6,781,375KB
zstd lvl22 - 6,214,702KB

I tested in a VM, so compression/decompression times/speeds are not useful measurements in my case.
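For reference, the compression ratios those sizes imply (unpacked size divided by compressed size, scaled ×100 so plain integer shell arithmetic works) can be checked in two lines:

```shell
# Ratio x100, computed from the KB sizes quoted above
echo $(( 16390624 * 100 / 7512046 ))   # pigz -9:    218 -> 2.18:1
echo $(( 16390624 * 100 / 6214702 ))   # zstd lvl22: 263 -> 2.63:1
```

So even the highest zstd level only moves the ratio from about 2.2:1 to about 2.6:1 on this image.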

george1421 Moderator @VincentJ:

@VincentJ We have to remember that the target client does all of the heavy lifting during image capture/deploy (I'm only speaking in general terms here). You can have the greatest compression/decompression ratio, but if the penalty is time, then what you save in over-the-wire transfer is lost because decompression on the target hardware is too slow.

If the OP is interested in testing this, it's possible to unpack the inits and swap out gzip for zstd to see if imaging rates are better or worse.

This post should help you narrow in on the spot where FOG uses gzip in the imaging process: https://forums.fogproject.org/topic/9525/dedupe-storage-how-to-best-bypass-pigz-packaging/4
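For anyone who wants to try that swap: capture and deploy are both simple pipelines, so the change is a one-word substitution at one stage. A minimal sketch of the pipe shape, shown with plain gzip on a throwaway file so it is self-contained (the partclone commands and paths FOG actually uses are omitted; this is illustrative only):

```shell
# Capture and deploy are both read | (de)compress | write pipelines.
# gzip stands in here; swapping in "zstd -3" / "zstd -d" on each side
# is the change being proposed. Paths are throwaway, not FOG's layout.
printf 'raw partition bytes' > /tmp/p.raw

cat /tmp/p.raw | gzip -6 > /tmp/p.img     # capture side (FOG uses pigz)
cat /tmp/p.img | gzip -d > /tmp/p.out     # deploy side

cmp -s /tmp/p.raw /tmp/p.out && echo "round trip OK"
```

Because the pipeline shape is identical, swapping the compressor binary inside the init is enough for a quick comparison.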

        Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG!

VincentJ Moderator:

https://code.facebook.com/posts/1658392934479273/smaller-and-faster-data-compression-with-zstandard/

They did a comparison between the gzip CLI and the zstd CLI: compression with zstd was around 5 times faster and decompression around 3.5 times faster…

zstd is tunable, which is why I included four levels, all of which beat gzip -9 for compression ratio. If using zstd, we can deliver the image faster (because it's smaller) and decompress the data faster once it arrives at the client… where is the loss?

I don't have a testing infrastructure I can use at the moment that would yield verifiable speed results. My storage array is not built for IOPS, and there would be live VMs also consuming bandwidth on the array and the network, which would affect the results. I would also be reading an image off a drive while trying to write the same image to the same drive within a VM.

Junkhacker Developer @VincentJ:

@VincentJ If you're comparing for performance, you need to compare zstd with pigz. pigz is what we use, and it is multithreaded. It is significantly faster than regular gzip while maintaining compatibility with its compression format.

Junkhacker
We are here to help you. If you are unresponsive to our questions, don't expect us to be responsive to yours.

x23piracy:

Interesting: https://news.ycombinator.com/item?id=12399804


VincentJ Moderator:

I am trying to set something up to test speed with local disks and a RAMdisk…

I have a copy of pigz for Windows and the zstd version of 7-Zip…

I'm setting up a quad-core VM on one of my hosts with over 32GB RAM. The CPU in the host is a dual E5-2603 v3 and storage is HDD RAID 1.

Which compression levels in pigz do you want me to try and test?

I'm not entirely sure this gets around all of the speed-related problems that could make the results less than ideal… but it's the best I can do.

Junkhacker Developer @VincentJ:

@VincentJ The default compression level is 6. Testing found that it had the best compression/performance ratio for us. I don't think many people change it.


Jaymes Driver Developer @Junkhacker:

@Junkhacker I have never had to adjust the compression. 3-5 minutes to image a client machine is well within what I find acceptable.

                    WARNING TO USERS: My comments are written completely devoid of emotion, do not mistake my concise to the point manner as a personal insult or attack.

george1421 Moderator @Jaymes Driver:

@Jaymes-Driver Same here. 3-5 minutes is more than sufficient, considering that MDT and WDS take much longer to produce a finished product. No need to change it from the defaults unless you are dealing with underpowered target systems.


Junkhacker Developer:

For what it's worth, I'm testing a few things with this new compression method. I'll share my results.


VincentJ Moderator:

🙂 I have more results in progress on the RAMdisk. Hopefully I'll get them done tonight.

Quazz Moderator @VincentJ:

@VincentJ From what I've read, this compression algorithm is targeted specifically at modern CPUs, leveraging instruction sets that older CPUs will lack.

That means tests will be needed on older hardware, as people will be using it for quite some time. If there's a huge time penalty there, it will still be a no-go for a lot of people.

Junkhacker Developer:

OK, here are my pseudo-scientific results:

pigz vs pzstd (parallel implementation of Zstandard, experimental)
Tests were performed using a Windows 7 image (larger of 2 partitions only).
Uncompressed image file size: 34,650,439,624 bytes

(De)compression tests were performed as closely as I could to emulate FOG's operation (without doing too much work :P). Files were cat-ed from a mounted NFS share and piped into the programs, with the results saved to an SSD. The test machine was running Lubuntu instead of a custom FOS init, because I'm lazy.
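Roughly, that harness has this shape (a sketch under assumptions: the share path and image name are hypothetical stand-ins, and gzip runs on a tiny generated file here so the snippet is self-contained; the real runs used pigz/pzstd on a ~34GB image read from NFS):

```shell
# Recreate the test shape: read the image from the "share", pipe it
# through the compressor, write the result locally; then time the
# decompression leg reading the compressed file back.
dd if=/dev/zero of=/tmp/share_img bs=1024 count=64 2>/dev/null

time sh -c 'cat /tmp/share_img | gzip -6 > /tmp/local.gz'   # compression leg
time sh -c 'cat /tmp/local.gz  | gzip -d > /dev/null'       # decompression leg
```

Substituting `pigz -6` or `pzstd -6` for `gzip -6` (and the matching `-d` call) reproduces the comparison described above.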

pigz -6 compression
duration: 6:06
file size: 17,548,659,028

pigz -6 decompression
duration: 6:00

pzstd default (3?) compression
duration: 5:16
file size: 16,967,988,207

pzstd decompression of the default-compression file
duration: 3:17

pzstd -6 compression
duration: 6:11
file size: 16,247,155,611

pzstd decompression of the -6 compression file
duration: 3:16

pzstd -9 compression
duration: 10:00
file size: 16,084,180,231

pzstd decompression of the -9 compression file
duration: 3:21

Edited to add zstd compression level 6


Quazz Moderator @Junkhacker:

                                @Junkhacker Interesting, what kind of CPU did the test device have?

Junkhacker Developer @Quazz:

@Quazz It's an OptiPlex 3020, i5.

It may be worth pointing out that the pigz performance in my tests is not really representative of what I typically see when actually imaging this machine with FOG: in testing, compression was significantly faster, and decompression a bit slower, than my experience. Not sure what that says about the usefulness of these tests.


VincentJ Moderator:

SO…

The 16,390,624KB file unpacked from the compressed Windows 10 image, placed on a 34GB RAMdisk.

Copying the file on the RAMdisk runs at 740MB/s, so that is well above what we need for imaging for most people.

Let's try some things to get some numbers.

Compression - Compressed size - Compression time - Decompression time
zstd lvl1 - 7,940,779KB - 50 seconds - 38 seconds
zstd lvl3 - 7,420,268KB - 75 seconds - 40 seconds
zstd lvl5 - 7,286,951KB - 128 seconds - 40 seconds
zstd lvl8 - 7,070,670KB - 261 seconds - 41 seconds
zstd lvl11 - 6,967,155KB - 425 seconds - 41 seconds
zstd lvl14 - 6,942,360KB - 674 seconds - 42 seconds
zstd lvl17 - 6,781,375KB - 1,618 seconds - 42 seconds
zstd lvl20 - 6,471,945KB - 2,416 seconds - 43 seconds
zstd lvl22 - 6,214,702KB - 3,970 seconds - 45 seconds

pigz.exe --keep -0 a:\d1p2 - 16,393,125KB - 72 seconds - 80 seconds
pigz.exe --keep -3 a:\d1p2 - 7,783,303KB - 292 seconds - 158 seconds (157 seconds)
pigz.exe --keep -6 a:\d1p2 - 7,535,149KB - 518 seconds - 149 seconds
pigz.exe --keep -9 a:\d1p2 - 7,512,046KB - 1,370 seconds - 149 seconds

Windows 10 Pro, 4 vCPU, 42GB RAM, with a 34GB RAMdisk.
Host: XenServer 7.0, dual E5-2603 v3, 64GB RAM, HDD RAID 1.
Other VMs were moved to the other hosts in the pool.

Decompression with pigz seems not to use all of the CPU (around 50%), while compression does use 100%.
Decompression with zstd does use all of the CPU, but most runs were around 400MB/s, so possibly I'm hitting some other limit.
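Dividing the 16,390,624KB unpacked size by the decompression times in the table backs up that ~400MB/s figure (integer shell arithmetic, so the values are truncated):

```shell
# Implied decompression throughput in MB/s (size in KB / seconds / 1024),
# numbers taken from the tables above
echo $(( 16390624 / 38  / 1024 ))   # zstd lvl1:  ~421 MB/s
echo $(( 16390624 / 45  / 1024 ))   # zstd lvl22: ~355 MB/s
echo $(( 16390624 / 149 / 1024 ))   # pigz -6:    ~107 MB/s
```

So in this run zstd decompressed roughly 3-4x faster than pigz across every level tested.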

VincentJ Moderator:

So, both of us have results showing zstd decompressing quicker and achieving better ratios.

I'm going to reconfigure my VM to only have 1 vCPU at 1.6GHz to see if I can get more useful decompression results.

I redid the pigz -3 decompression test twice to confirm it was slower than the others… Not what I was expecting, but that is what happened.

In my compression tests, the standard -6 on pigz is beaten on ratio by zstd lvl3, which also completes 443 seconds faster… We could go up to zstd lvl11 and still finish 93 seconds quicker while saving around 550MB.
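Those deltas check out against the tables above (compression times in seconds, sizes in KB):

```shell
echo $(( 518 - 75 ))                    # pigz -6 vs zstd lvl3:  443 s faster
echo $(( 518 - 425 ))                   # pigz -6 vs zstd lvl11:  93 s faster
echo $(( (7535149 - 6967155) / 1024 ))  # size saved at lvl11: ~554 MB
```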

Junkhacker Developer:

I've been talking with @Tom-Elliott about this, and we don't think it would be worth the effort it would take to implement Zstandard. The thing is, faster decompression is kind of irrelevant for FOG at the moment: what slows down deployments is transfer speed. The only way FOG would get faster is if the file size were very significantly decreased. While the compression ratio is better with Zstandard, the difference isn't very significant until you get to the higher compression levels, where processing time becomes a big issue.

There are other issues that deter us from adoption, but that's the most significant reason. In fact, the single greatest reason TO adopt it would be because I think it's really cool, lol.


Jaymes Driver Developer @Junkhacker:

                                          @Junkhacker said in ZSTD Compression:

                                          there are other issues that deter us from adoption, but that’s the most significant reason. in fact, the single greatest reason TO adopt it would be because i think it’s really cool, lol.

                                          It would be REALLY COOL 😄


Tom Elliott:

The problem isn't the implementation or not.

Already, with pigz in use, the issue (beyond multiple unicast tasks) is most often the slowdown in writing the data to the disk. This is especially present when one is dealing with SSDs.

It's great that you can have "fast" decompression, but that only goes so far. You still have to write the data to disk. You have some buffer, but we're already decompressing the data as fast as we can.

Where this might be very useful, however, would be uncompressed images, compressed on the fly as the data is requested and then placed on disk, so we would have a live way of diminishing the amount of data passed across the network. Once the data reaches the client, the only hold-up is the speed at which it can be pushed from RAM and written to disk. Even this, however, can only do so much.

                                            Is it really worth implementing a new compression mechanism to maybe get a speed increase of possibly 1% during our imaging process?

