
    ZSTD Compression

    Feature Request
    • VincentJ Moderator

      SO…

      16,390,624KB file extracted from the compressed Windows 10 image, onto a 34GB RAMdisk.

      Copying the file on the RAMdisk is 740MB/s so that is well above what we need for imaging for most people.

      Let's try some things to get some numbers.

      Compression - Compressed size - Compression time - Decompression time
      zstd lvl1 - 7,940,779KB - 50 seconds - 38 seconds
      zstd lvl3 - 7,420,268KB - 75 seconds - 40 seconds
      zstd lvl5 - 7,286,951KB - 128 seconds - 40 seconds
      zstd lvl8 - 7,070,670KB - 261 seconds - 41 seconds
      zstd lvl11 - 6,967,155KB - 425 seconds - 41 seconds
      zstd lvl14 - 6,942,360KB - 674 seconds - 42 seconds
      zstd lvl17 - 6,781,375KB - 1,618 seconds - 42 seconds
      zstd lvl20 - 6,471,945KB - 2,416 seconds - 43 seconds
      zstd lvl22 - 6,214,702KB - 3,970 seconds - 45 seconds

      pigz.exe --keep -0 a:\d1p2 - 16,393,125KB - 72 seconds - 80 seconds
      pigz.exe --keep -3 a:\d1p2 - 7,783,303KB - 292 seconds - 158 seconds (157 seconds)
      pigz.exe --keep -6 a:\d1p2 - 7,535,149KB - 518 seconds - 149 seconds
      pigz.exe --keep -9 a:\d1p2 - 7,512,046KB - 1,370 seconds - 149 seconds
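      To make the two tables easier to compare at a glance, here's a quick sketch (my own, not from the thread) that turns the posted sizes into compression ratios against the 16,390,624KB raw image:

```python
# Sketch only: recomputing compression ratios from the numbers posted above.
# Sizes are in KB as quoted; a subset of the rows is used for brevity.
RAW_KB = 16_390_624

results = {
    # name: (compressed KB, compression s, decompression s)
    "zstd lvl1":  (7_940_779,    50,  38),
    "zstd lvl3":  (7_420_268,    75,  40),
    "zstd lvl11": (6_967_155,   425,  41),
    "zstd lvl22": (6_214_702, 3_970,  45),
    "pigz -6":    (7_535_149,   518, 149),
    "pigz -9":    (7_512_046, 1_370, 149),
}

for name, (kb, comp_s, decomp_s) in results.items():
    print(f"{name:10s} ratio {RAW_KB / kb:4.2f}x  "
          f"compress {comp_s:>5}s  decompress {decomp_s:>3}s")
```

      Worked out this way, zstd lvl3 already beats pigz -6 on ratio (about 2.21x vs 2.18x) while compressing far faster, which is the comparison VincentJ draws later in the thread.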

      Windows 10 Pro, 4 vCPU 42GB RAM with 34GB RAM Disk.
      Host XenServer 7.0, dual E5-2603 v3, 64GB RAM, HDD RAID 1.
      Other VMs moved to the other hosts in the pool.

      Decompression seems to not use all the CPU with pigz… around 50%.
      Compression does use 100% CPU.
      Decompression with zstd does use all the CPU - but most runs were around 400MB/s, so possibly I'm hitting some other limit.

      • VincentJ Moderator

        So, both of us have results that show zstd decompressing quicker and having better ratios.

        I’m going to reconfigure my VM to only have 1 vCPU at 1.6GHz to see if I can get more useful decompression results.

        I redid the pigz -3 decompression test twice to confirm it was slower than the others… Not what I was expecting, but that is what happened.

        In my compression tests, the standard -6 on pigz is beaten on ratio by zstd lvl3, which also completes 443 seconds faster… We could go up to zstd lvl11 and still be 93 seconds quicker while saving around 550MB.

        • Junkhacker Developer

          I’ve been talking with @Tom-Elliott about this, and we don’t think it would be worth the effort it would take to implement Zstandard. The thing is, faster decompression is kind of irrelevant for FOG at the moment: what slows down deployments right now is transfer speed. The only way FOG would get faster is if the file size were very significantly decreased. While the compression ratio is better with Zstandard, the difference isn’t very significant until you get to the higher compression levels, where processing time becomes a big issue.

          There are other issues that deter us from adoption, but that’s the most significant reason. In fact, the single greatest reason TO adopt it would be because I think it’s really cool, lol.

          Junkhacker
          We are here to help you. If you are unresponsive to our questions, don't expect us to be responsive to yours.

          • Jaymes Driver Developer @Junkhacker

            @Junkhacker said in ZSTD Compression:

            there are other issues that deter us from adoption, but that’s the most significant reason. in fact, the single greatest reason TO adopt it would be because i think it’s really cool, lol.

            It would be REALLY COOL 😄

            WARNING TO USERS: My comments are written completely devoid of emotion, do not mistake my concise to the point manner as a personal insult or attack.

            • Tom Elliott

              The problem isn’t whether or not we implement it.

              Already, with pigz in use, the issue (beyond multiple unicast tasks) is most often a slowdown in writing the information to the disk. This is especially present when one is dealing with SSDs.

              It’s great that you can have “fast” decompression, but that only goes so far. You still have to write the data to disk. You have some buffer, but we’re already “decompressing” the data as fast as we can.

              Where this might be very useful, however, would be uncompressed images, compressed as the data is requested, and then placed on disk so we have a live element of diminishing the amount of data to be passed across the network. Once it’s passed to the client, the only “hold” is on the speed at which data can be pushed from ram and written to disk. Even this, however, can only do so much.

              Is it really worth implementing a new compression mechanism to maybe get a speed increase of possibly 1% during our imaging process?

              Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG! Get in contact with me (chat bubble in the top right corner) if you want to join in.

              Web GUI issue? Please check apache error (debian/ubuntu: /var/log/apache2/error.log, centos/fedora/rhel: /var/log/httpd/error_log) and php-fpm log (/var/log/php*-fpm.log)

              Please support FOG if you like it: https://wiki.fogproject.org/wiki/index.php/Support_FOG

              • Tom Elliott

                I understand the speed would be significantly increased on upload tasks, but I don’t know how often people are uploading.

                  • VincentJ Moderator

                  1 vCPU 1.6GHz - the system can no longer saturate gigabit over network shares…
                  Down from 110MB/s to 82MB/s

                  Compression - Compressed size - Decompression time
                  zstd lvl1 - 7,940,779KB - 131 seconds
                  zstd lvl3 - 7,420,268KB - 134 seconds
                  zstd lvl11 - 6,967,155KB - 139 seconds
                  zstd lvl22 - 6,214,702KB - 157 seconds

                  pigz.exe --keep -6 a:\d1p2 - 7,535,149KB - 247 seconds

                  On my quad-core VM, pigz -6 only managed 50MB/s decompression; zstd level 11 on the single-core VM matches that 50MB/s…
                  On the single-core VM, pigz -6 is only 30MB/s, while the slowest zstd gets, at level 22, is 39.5MB/s.

                  If we use the single-core numbers: writing the whole image in 247 seconds (which isn’t too much faster than expected anyway) is around 66MB/s to disk, while using zstd 11 and writing it in 139 seconds is 117MB/s. Most SATA disks should be able to do this… It will be a push for some 2.5" disks… (I checked numbers for 2.5" and 3.5" WD Greens)
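                  As a rough cross-check of that arithmetic (my sketch, not from the post; MB/s here uses 1MB = 1,000KB, which matches the quoted figures):

```python
# Sanity check on the effective write rates quoted above (sketch only).
# The uncompressed image is 16,390,624KB, and 1MB is taken as 1,000KB.
IMAGE_KB = 16_390_624

def write_rate_mb_s(seconds):
    """Effective rate if the whole uncompressed image is written in `seconds`."""
    return IMAGE_KB / 1000 / seconds

print(f"pigz -6, 247s:  {write_rate_mb_s(247):.0f} MB/s")  # ~66 MB/s
print(f"zstd 11, 139s:  {write_rate_mb_s(139):.0f} MB/s")  # ~118 MB/s
```

                  The small difference from the 117MB/s quoted above is just rounding.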

                    • compman @VincentJ

                    Note that, since v1.1.3, there is a multithread mode available with zstd.

                     It needs to be compiled with specific flags, though.
                     On Linux, that means typing make zstdmt.
                     For Windows, there are pre-compiled binaries in the release section: use the zstdmt one.

                    Since pigz is multi-threaded, it would be more fair to compare to zstdmt, rather than single-threaded zstd.

                     The number of threads can be selected with the -T# switch, like xz.
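                     For illustration (the helper and file name below are made up, not FOG code): the thread-count switch slots in alongside the usual options, so a run might be assembled like this:

```python
# Sketch only: building a zstd argv with an explicit thread count (-T#),
# per the note above. The function name and path are illustrative.
def zstd_cmd(path, level=3, threads=4, decompress=False):
    """Return a zstd command line that keeps the input file, like pigz --keep."""
    cmd = ["zstd", f"-T{threads}", "--keep"]
    cmd.append("-d" if decompress else f"-{level}")
    cmd.append(path)
    return cmd

print(zstd_cmd("d1p2", level=11, threads=8))
# ['zstd', '-T8', '--keep', '-11', 'd1p2']
```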

                    • VincentJ Moderator

                      The version of zstd I’ve been using is using all my threads 🙂

                      • VincentJ Moderator

                        Maybe you just saw the note about 1 vCPU. I only reduced to 1 vCPU because the numbers with 4 vCPUs were all so close together.

                        It also might help to simulate a ‘low end’ machine…

                        • A Former User

                          For those of us not smart enough to fully understand, can someone give me a simple comparison – in time – between the proposed compression versus the current compression?

                          I work two IT jobs, one full-time and one part-time. Between the two, we order tens of thousands of computers each year. For January 2017, this was the most ordered machine from Dell, and most other orders were also in this same power range:

                          OptiPlex 5040 Small Form Factor
                          i5-6500 Processor (Quad Core, 6MB, 3.2GHz)
                          8 GB RAM
                          256 GB SSD
                          1 Gbps NIC

                          Percentage wise, approximately how much faster/slower would the proposed compression be for this machine when deploying an image to it?

                          • Tom Elliott @A Former User

                            @loosus456 It’s hard to say. Compression on upload would be phenomenal, but for deployment I don’t think there’d be a huge difference, as even with our current stuff we’re mostly limited by the speed of writing to disk.

                            • A Former User @Tom Elliott

                              @Tom-Elliott We do upload often (about twice a month), but if the upload isn’t much, much faster and the deployment isn’t significantly faster, it probably isn’t worth it.

                              I do wonder if HyperV upload through the legacy adapter would be faster, though. That takes literal hours right now.

                              • Tom Elliott @A Former User

                                @loosus456 Let’s say you upload 2 images a month and you deploy 400 times a month. While upload would be “faster”, you’re only speeding up the upload process; you still have your “setup” to create the image, which is what takes most of your time.

                                • A Former User @Tom Elliott

                                  @Tom-Elliott Well, when it comes to uploading from HyperV with a legacy adapter to FOG, upload time is actually what takes most of the time. Image creation takes little time in comparison.

                                  But yes, uploading from a physical machine is quite fast.

                                  • Tom Elliott

                                    So, I think this is what I want to say.

                                    Seeing as ZSTD, from what I can see here, only impacts upload speeds, is it worth the effort to support a new standard and methodology when pigz/gzip is pretty well standardized?

                                    Consider this:

                                    While capturing could be significantly improved, the deploy (which I imagine happens far more often than capture tasks) would not see a significant boost. Now, if you have 10 unicast tasks with ZSTD that are able to deploy much more reliably and faster, this would be an improvement worth considering.

                                    So if you all want to try this, build your inits using the Wiki instructions and the information from the buildroot source already provided in every installation of FOG, and run tests. Right now, as I’m seeing it, implementing this has been focused solely on compressing the image after it has already been captured. Has anybody actually “compressed” the image during a real “capture” task?

                                    Things to work with:

                                    1. Integration into the inits as a real utility for us to use.
                                    2. Do the same results happen on capture (maybe I missed this part)?
                                    3. Do multiple unicast deploys deploy faster using this mechanism?

                                    • Tom Elliott

                                      So, for what it’s worth, I’m giving it a shot. I have not coded anything to use zstd, but I am running an installation/build test that will hopefully build the inits with the necessary zstd binaries so others can test internally.

                                      • Wayne Workman @Tom Elliott

                                        @Tom-Elliott said in ZSTD Compression:

                                        Already, with PIGZ in use the issue (beyond multiple Unicast tasks) is most often slow down in writing the information to the disk.

                                        As I was reading through this thread, this is exactly what I thought: that the biggest benefit would come with multiple simultaneous unicast deployments. Maybe instead of having Max Clients set at 2 I could do 3.

                                        And who knows, maybe I’ll squish the images enough to store 1 extra.

                                        Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG!
                                        Daily Clean Installation Results:
                                        https://fogtesting.fogproject.us/
                                        FOG Reporting:
                                        https://fog-external-reporting-results.fogproject.us/

                                        • VincentJ Moderator

                                          @Tom-Elliott Thanks for putting it into the init.

                                          Would it be as simple as searching through the code for the imaging commands and changing them to use zstd instead of pigz, or would there be more complicated things involved due to the way the commands are generated?

                                          Do you know if most people use multicast or just do multiple unicasts for deployments? I have never got multicast to work fully and always end up with each client downloading on its own. I have usually had my server set to 4 clients at once, except when I had 10GbE and 2Gbit links between MDF and IDF… On that machine I used 8, and with ZFS caching I had no problems with the disk IO of so many transfers.

                                          If we can get improvements via increasing those numbers then it makes things a bit more worth the effort to speed up people’s deployments.

                                          As for uploading… I also have to upload every month or so, and with one of my clients I have a 2-hour time window to do all maintenance, so uploading sometimes gets delayed as it can take a considerable amount of time.

                                          The other benefit of reduced file size would also help, in my case, by reducing the sync time between sites over WAN.

                                          As people’s machines become more powerful, we can scale with them instead of being held back by the lack of speed in pigz. 10GbE is coming down in price, and SSD/NVMe/HDD are getting better all the time.

                                          • Wayne Workman @VincentJ

                                            @VincentJ said in ZSTD Compression:

                                            Do you know if most people use multicast or just do multiple unicast for deployments?

                                            It’s a mix.

                                            Copyright © 2012-2024 FOG Project