Rate is at a slow crawl when trying to deploy/capture image
-
@tom-elliott I understand about the compression rates, but I’m not sure about the Gzip/ZSTD stuff. All of our images are set to partclone Gzip as per screenshot. Am I safe to assume that I have something set up wrong in the image management?
-
@hvaransky the maximum compression gzip supports is -9. If you set compression to 22 while gzip is the compression method, it will just cap at -9, which is very, very slow.
-
@tom-elliott I changed the compression down to 6 and deployed the image again. It started out at 10GB/min, but within 10 minutes it was down to 247MB/min. I’m at 44% complete with it running for an hour and 22 minutes, which is definitely MUCH better, but is there something else I need to adjust to get it even quicker? The rate is still dropping (it is going down by about 2MB every 3 minutes or so). Sorry for all the questions, I really am a newbie with FOG.
-
@hvaransky If you want high compression, you’ll want to switch to zstd; it’s faster and compresses better. Don’t bother maxing it out, though, as you’ll triple the time it takes to compress and only save a few percent in size. Comparing gzip -6 to zstd -11 (our recommended settings), my testing showed zstd was 10% faster at capture, 26% smaller in final file size, and 36% faster on deployment.
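If you want a rough feel for the difference on your own hardware, you can compare the two outside of FOG on any large file (sample.img below is just a placeholder; results will vary with your data and CPU):
time gzip -6 -c sample.img > sample.img.gz
time zstd -11 sample.img -o sample.img.zst
ls -lh sample.img sample.img.gz sample.img.zst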
-
@junkhacker We’re not really worried about compression size per se. We would rather it take less time for an image to deploy. (It was running at about 35 minutes per machine last summer and is now taking more than 7 hours to complete.) On the plus side, after changing the compression level on the current image I’m deploying, it is predicted to take only about 3 1/2 hours to finish!
-
@hvaransky There’s a lot of variables to consider in deploy, or capture, speed.
First is your network.
Second is the disks writing to/reading from.
Third is the compression. As @Junkhacker stated, finding the “goldilocks” zone of compression is also useful. For example, gzip at -9 will take a long time to capture and you don’t really gain much additional compression. Deploy speed isn’t much better either (partly because the compression has already reached its peak).
Less data to deploy = faster network transfer, but if your disk is really old or slow the speed could be limited there. (This applies to both deploy and capture.)
I’ve found -11 on zstd to be a good zone, though since I don’t have much disk space I use zstd at 19 (which I find is still faster than gzip at 9) during capture.
As I said, there’s a lot of variables to consider.
Also, if your FOG server is replicating images and files at the same time as you’re performing a capture or deploy, chances are the slowdown is due to the server being used for both at once, as the heads have to jump around the server’s hard drive.
You could try rebooting the fog server though. After all, the server is still a computer, and while it’s not necessarily a normal requirement, rebooting might solve many of the problems you’re seeing.
Also, look at your network: if you have a 1Gbps network but a switch in the path is only 100Mbps, the maximum capture/deploy rate across the network to that machine would be limited to 100Mbps (about 750MB/min), where 1Gbps would give about 7.5GB/min. (This is for uncompressed data, though.) The speed is also (as stated earlier) limited by the hard drives of both the client and server machines. Most often I’ve found the slowdown is not the network, but rather the HDDs reading/writing on either end.
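(Quick math on those figures, ignoring protocol overhead: 100Mbps / 8 = 12.5MB/s, which is roughly 750MB/min; 1Gbps / 8 = 125MB/s, which is roughly 7.5GB/min.)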
-
If you have some time, I’d like you to do some system benchmarking. Maybe we can find the source of your issues.
The first and easiest thing to test is the local disk subsystem. From a Linux command prompt on your FOG server, run these commands.
sudo dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=direct
Run it 3 times and average the results. The output should look something like this:
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 13.9599 s, 76.9 MB/s
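If you’d rather not retype the command for each run, a simple loop does the same test back to back (just a convenience; read the MB/s figure from each pass):
for i in 1 2 3; do sudo dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=direct; done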
Then run this command 3 times and average the output.
sudo echo 3 | tee /proc/sys/vm/drop_caches && time dd if=/tmp/test1.img of=/dev/null bs=8k
Post the results here.
And finally we need to remove the 1GB file we created.
sudo rm -f /tmp/test1.img
The next bit is network throughput, but let’s see your disk speeds to start.
-
@george1421 I couldn’t get the 2nd part of the command line to work as I kept getting permission denied. I was able to use the built in benchmarking on the disks menu to come up with the screenshot below:
We also double-checked all of the switches last night (all seem to be set properly) and rebooted the FOG server. I am going to try to capture a new image with the ZSTD compression instead of Gzip. On the downside, even after changing the compression on the image and it starting out at a super high transfer rate yesterday, it still took over 8 hours to complete, and the rate had almost bottomed out by the time it was an hour in.
-
@hvaransky If you run
sudo su -
first, then you should be able to run the commands without the sudo at all. It would be interesting, from a benchmarking standpoint, to use the same tool so we get comparable relative numbers.
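For what it’s worth, the permission denied on the second command most likely comes from sudo applying to echo rather than tee in that pipeline; if you’d rather not switch to root, moving sudo onto tee should also work:
echo 3 | sudo tee /proc/sys/vm/drop_caches && time dd if=/tmp/test1.img of=/dev/null bs=8k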
But based on the benchmark screen, I would expect you have either a SATA SSD or a multi-hard-drive (>6) disk array, maybe RAID 10. So your slowness is probably not your disk subsystem, and the next step is network testing.
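For that, something like iperf3 is the usual go-to, assuming it’s available (or installable) on both the FOG server and a client on the same network; it takes the disks out of the equation entirely, so it tells you what the wire itself can do:
On the FOG server: iperf3 -s
On a client: iperf3 -c <fog-server-ip> -t 30
(<fog-server-ip> is just a placeholder for your server’s address.)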
-
Just going to add in here that when I want to see how busy my HDDs are during tasks, I use nmon. You can run it as a normal user and press d to view disk activity percentage. Very friendly to a newb. Press n to also view network stats and c to view CPU as well.
If I were you, I’d bring a computer over to the same physical switch that your FOG server sits on and run the imaging from there. See if it sustains a better speed or not, to determine whether the network path between them is to blame.