
Rate is at a slow crawl when trying to deploy/capture image

FOG Problems
  • H
    hvaransky
    last edited by Feb 12, 2018, 2:41 PM

    Very much a newbie FOG user here, so if this question sounds stupid, I apologize!

    I am trying to both deploy and capture an image, and everything starts out great until the actual deploy/capture. The rate starts out high, but within 30 minutes it goes so low (20-60MB/min) that I end up canceling the task because it's taking over 24 hours to complete. I did let one deployment run to completion, and the image seems to be running okay on the computer I put it on. The image I deployed was made last July and worked great (the rate ran at about 1.5GB/min) all summer while re-imaging. I've only noticed it being this slow since about December. I have tried rebooting the FOG server and restarting the services, but nothing helps. We have not run any updates on the FOG server. Currently running FOG 1.4.4 on Ubuntu 14.04.

    • W
      Wayne Workman @hvaransky
      last edited by Feb 12, 2018, 3:15 PM

      @hvaransky Most likely not a FOG problem, just so you know. You should start with general Linux troubleshooting - and focus on the network.

      There's a utility for Linux called ethtool that you can install and use to see the configuration of the local NIC of a Linux OS. Run ethtool on your FOG server and see if it's configured at 1Gbps. Also, make sure the link between the FOG server and the target host is 1Gbps all the way through. A bad or kinked patch cable can cause a 1Gbps link to be derated down to 100Mbps or even 10Mbps. There's another tool called iperf for testing network throughput between two Linux boxes - install that, live-boot the target computer using an Ubuntu Desktop disk or similar, and run some iperf tests to see the throughput.
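      For example, a minimal sketch ("eth0" is a placeholder interface name - substitute yours from `ip link`; the iperf commands are meant to be run by hand on the two machines):

```shell
# Read the negotiated link speed (in Mb/s) for a NIC from sysfs, which
# reports the same figure ethtool does. "eth0" is a placeholder name.
iface=eth0
speed=$(cat "/sys/class/net/$iface/speed" 2>/dev/null || echo unknown)
echo "$iface link speed: $speed"

# Rough end-to-end throughput test with iperf (run these manually):
#   on the FOG server:           iperf -s
#   on the live-booted target:   iperf -c <fog-server-ip> -t 30
```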

      Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG!
      Daily Clean Installation Results:
      https://fogtesting.fogproject.us/
      FOG Reporting:
      https://fog-external-reporting-results.fogproject.us/

      • H
        hvaransky @Wayne Workman
        last edited by Mar 8, 2018, 1:57 PM

        @wayne-workman Thank you!! Will definitely try those options!

        • H
          hvaransky @Wayne Workman
          last edited by Mar 9, 2018, 3:31 PM

          @wayne-workman I did run the ethtool utility and the speed is set at 1Gbps. I also did an ifconfig and am getting RX at 419.0MB and TX at 20.6GB. I restarted the server and reset the services yesterday as well. I thought maybe it was because I was trying to image during the day when the network is being hit hard, so I waited until most everyone left, but it still took over 7 hours to complete. Connection between server and host computer looks to be okay as well. I did look back at the imaging reports, and the same image only took 1 hour and 7 minutes last July.

          • W
            Wayne Workman @hvaransky
            last edited by Mar 9, 2018, 3:45 PM

            @hvaransky Where is the computer you were imaging? Is it connected to the same physical switch as the FOG server? If not, can you move it to that switch and try again?


            • H
              hvaransky @Wayne Workman
              last edited by Mar 9, 2018, 4:39 PM

              @wayne-workman The computer that I imaged last night is in the same building, but is through a different switch. I tried an image last week that took 7 hours and 44 minutes and was connected to the same switch. They are two different images, but both were made around the same time for HP desktops.

              • W
                Wayne Workman @hvaransky
                last edited by Mar 9, 2018, 5:21 PM

                @hvaransky For the image that is deploying slowly:

                • What is the compression setting set to?
                • How big does the image say it is on the FOG Server?

                Also, please post a screenshot of the top command on your FOG Server during the time when the largest partition is being written to disk.


                • H
                  hvaransky @Wayne Workman
                  last edited by Mar 12, 2018, 3:22 PM

                  @wayne-workman I took two screenshots because I'm not exactly sure what we're looking for. The compression is set at 22 and the image size is 465GB.

                  0_1520868079537_Screenshot from 2018-03-12 11:10:46.png

                  0_1520867985169_Screenshot from 2018-03-12 11:10:52.png

                  • T
                    Tom Elliott @hvaransky
                    last edited by Mar 12, 2018, 3:59 PM

                    @hvaransky Turn compression down to 19. Levels 20 and 22 on zstd require a lot of memory. While the selector lets you choose that high, it is not recommended; 19, I've found, is the highest you can go without causing an issue on the client machine.

                    Please help us build the FOG community with everyone involved. It's not just about coding - way more we need people to test things, update documentation and most importantly work on uniting the community of people enjoying and working on FOG! Get in contact with me (chat bubble in the top right corner) if you want to join in.

                    Web GUI issue? Please check apache error (debian/ubuntu: /var/log/apache2/error.log, centos/fedora/rhel: /var/log/httpd/error_log) and php-fpm log (/var/log/php*-fpm.log)

                    Please support FOG if you like it: https://wiki.fogproject.org/wiki/index.php/Support_FOG

                    • W
                      Wayne Workman @hvaransky
                      last edited by Wayne Workman Mar 12, 2018, 4:19 PM

                      @hvaransky I agree with Tom, that compression setting is too high. I wanted the top output to look for any potential load issues; I don’t see any.


                      • T
                        Tom Elliott
                        last edited by Mar 12, 2018, 4:22 PM

                        I should add: the higher the compression, the slower the image will be captured. For gzip, the maximum compression allowed is 9, and it is VERY slow. Zstd's maximum compression is 22, though 19 is the highest you can go without running into memory issues, I've found. Zstd at 19 is still faster than gzip at 9 in my experience. The reason higher compression takes longer is the extra work that has to be done to compress the data that much further. Hopefully this lends some insight.
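                        To illustrate the trade-off, a quick sketch comparing two gzip levels on generated text (exact sizes and timings will vary, and real disk-image data compresses differently):

```shell
# Compress the same ~3 MB text file at gzip -1 and gzip -9 and compare
# sizes: -9 is somewhat smaller but costs noticeably more CPU time.
seq 1 500000 > /tmp/sample.txt
gzip -1 -c /tmp/sample.txt > /tmp/sample.1.gz
gzip -9 -c /tmp/sample.txt > /tmp/sample.9.gz
s1=$(wc -c < /tmp/sample.1.gz)
s9=$(wc -c < /tmp/sample.9.gz)
echo "gzip -1: $s1 bytes, gzip -9: $s9 bytes"
```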


                        • H
                          hvaransky @Tom Elliott
                          last edited by Mar 13, 2018, 1:19 PM

                          @tom-elliott I understand about the compression rates, but I'm not sure about the gzip/zstd stuff. All of our images are set to Partclone gzip, per the screenshot. Am I safe to assume that I have something set up wrong in image management?

                          0_1520947088026_fog image management.png

                          • T
                            Tom Elliott @hvaransky
                            last edited by Mar 13, 2018, 4:31 PM

                            @hvaransky The maximum gzip can compress is -9. If you set compression to 22 and gzip is the compression manager, it will be set to -9. This is very, very slow.


                            • H
                              hvaransky @Tom Elliott
                              last edited by Mar 13, 2018, 6:14 PM

                              @tom-elliott I changed the compression down to 6 and deployed the image again. It started out at 10GB/min, but within 10 minutes it was down to 247MB/min. I'm at 44% complete after an hour and 22 minutes, which is definitely MUCH better, but is there something else I need to adjust to make it even quicker? The rate is still dropping (about 2MB every 3 minutes or so). Sorry for all the questions - I really am a newbie with FOG.

                              • JunkhackerJ
                                Junkhacker Developer @hvaransky
                                last edited by Junkhacker Mar 13, 2018, 6:15 PM

                                @hvaransky If you want high compression, you'll want to switch to zstd - it's faster and compresses better. Don't bother maxing it out, though; you'll triple the time it takes to compress and only save a few percent in size. Comparing gzip -6 to zstd -11 (our recommended settings), my testing showed zstd was 10% faster at capture, 26% smaller in final file size, and 36% faster on deployment.
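                                A rough way to reproduce that comparison locally (a sketch; it requires the zstd CLI, and generated text won't mirror disk-image ratios exactly):

```shell
# Compare gzip -6 with zstd -11 output sizes on identical input.
seq 1 200000 > /tmp/cmp.txt
gzip -6 -c /tmp/cmp.txt > /tmp/cmp.gz
if command -v zstd >/dev/null 2>&1; then
    zstd -11 -q -f -o /tmp/cmp.zst /tmp/cmp.txt
    echo "gzip -6: $(wc -c < /tmp/cmp.gz) bytes, zstd -11: $(wc -c < /tmp/cmp.zst) bytes"
else
    echo "zstd CLI not installed; gzip -6: $(wc -c < /tmp/cmp.gz) bytes"
fi
```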

                                Junkhacker
                                We are here to help you. If you are unresponsive to our questions, don't expect us to be responsive to yours.

                                • H
                                  hvaransky @Junkhacker
                                  last edited by Mar 13, 2018, 6:18 PM

                                  @junkhacker We're not really worried about compression size per se. We would rather it take less time for an image to deploy. (It was running at about 35 minutes per machine last summer and is now taking more than 7 hours to complete.) On the plus side, after changing the compression on the image I'm currently deploying, it is predicted to take only about 3 1/2 hours to finish!

                                  • T
                                    Tom Elliott @hvaransky
                                    last edited by Mar 13, 2018, 6:30 PM

                                     @hvaransky There are a lot of variables to consider in deploy, or capture, speed.

                                     First is your network.
                                     Second is the disks being written to/read from.
                                     Third is the compression.

                                     As @Junkhacker stated, finding the "goldilocks" zone of compression is also useful. For example, gzip at -9 takes a long time to capture and you don't really gain much extra compression. The speed to deploy isn't much better either (partly because the compression has already reached its peak).

                                     Less data to deploy = faster network transfer, but if your disk is really old or slow, the speed could be limited there. (This applies to both deploy and capture.)

                                     I've found -11 on zstd to be a good zone, though I don't have much disk space, so I use zstd at 19 (which I find is still faster than gzip at 9) during capture.

                                     As I said, there are a lot of variables to consider.

                                     Also, if your FOG server is replicating images and files at the same time as you are performing a capture or deploy, chances are the slowdown is due to the server being used for both at once, as the drive heads have to jump around the server's hard drive.

                                     You could try rebooting the FOG server. After all, the server is still a computer, and while it's not necessarily a normal requirement, rebooting might solve many of the problems you're seeing.

                                     Also, look at your network: if you have a 1Gbps network but a switch in the path is 100Mbps, the maximum capture/deploy rate across the network to that machine is limited to 100Mbps (about 0.75GB/min), whereas 1Gbps gives about 7.5GB/min. (This is for uncompressed data, though.) The speed is also, as stated earlier, limited by the hard drives of both the client and server machines. Most often I've found the slowdown is not the network; rather, it's the HDDs reading/writing.
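                                     For reference, the link-speed arithmetic works out as follows (uncompressed data, 8 bits per byte):

```shell
# Convert link speed in Mbps to GB/min: Mbps / 8 = MB/s, then * 60 / 1000.
for mbps in 100 1000; do
    awk -v m="$mbps" 'BEGIN { printf "%d Mbps ~ %.2f GB/min\n", m, m / 8 * 60 / 1000 }'
done
```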


                                    • george1421G
                                      george1421 Moderator
                                      last edited by Mar 13, 2018, 8:19 PM

                                      If you have some time, I’d like you to do some system benchmarking. Maybe we can find the source of your issues.

                                      The first and easiest to test is local disk subsystem. From a linux command prompt on your fog server run these commands.

                                      sudo dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=direct

                                      run it 3 times and average the output which should look something like this:

                                      1+0 records in
                                      1+0 records out
                                      1073741824 bytes (1.1 GB) copied, 13.9599 s, 76.9 MB/s
                                      

                                      Then run this command 3 times and average the output.
                                       echo 3 | sudo tee /proc/sys/vm/drop_caches && time dd if=/tmp/test1.img of=/dev/null bs=8k

                                      Post the results here.

                                      And finally we need to remove the 1GB file we created.
                                      sudo rm -f /tmp/test1.img
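                                       The run-it-3-times-and-average step can also be scripted; a sketch (it uses conv=fdatasync rather than oflag=direct, since direct I/O isn't supported on every filesystem, and it assumes dd reports every run in the same unit):

```shell
# Run a 64 MB write test three times and average the throughput figure
# taken from the next-to-last field of dd's summary line.
avg=$(for i in 1 2 3; do
        dd if=/dev/zero of=/tmp/test1.img bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
      done | awk '{ sum += $(NF-1) } END { printf "%.1f", sum / NR }')
echo "average: $avg (in dd's reported unit)"
rm -f /tmp/test1.img
```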

                                      The next bit is network throughput. But lets see your disk speeds to start.


                                      • H
                                        hvaransky @george1421
                                        last edited by Mar 14, 2018, 1:35 PM

                                        @george1421 I couldn’t get the 2nd part of the command line to work as I kept getting permission denied. I was able to use the built in benchmarking on the disks menu to come up with the screenshot below:

                                        0_1521034320684_benchmark test.png

                                         We also double-checked all of the switches last night - all seem to be set properly - and rebooted the FOG server. I am going to try capturing a new image with zstd compression instead of gzip. On the downside, even after changing the compression on the image and seeing a super-high transfer rate at the start yesterday, it still took over 8 hours to complete, and the rate had almost bottomed out by the time it was an hour in.

                                        • george1421G
                                          george1421 Moderator @hvaransky
                                          last edited by Mar 14, 2018, 2:22 PM

                                           @hvaransky If you run sudo su - first, then you should be able to run the commands without sudo at all.

                                           It would be interesting, from a benchmarking standpoint, to use the same tool so we get the same relative numbers.

                                           But based on the benchmark screen, I would expect you have either a SATA SSD or a multi-drive (>6) disk array, maybe RAID 10. So your slowness is probably not your disk subsystem. The next step is network testing.

