Imaging computers at 2.6Gbps
-
@Tom-Elliott said in Imaging computers at 1.5Gbps:
I just have to say,
HOLY SHIT!
Ditto…
Of course, me being a bit OCD, I have to know: was this transfer rate achieved with FOG's standard compression or the new (as of 1.3.4) zstd file compression? To get the best speeds from zstd you need to capture and deploy using the zstd drivers. If this was done with the legacy data compressor, what was your compression index number?
The reason I ask the 20 questions is that what you have is mighty impressive. If others want to duplicate your test, it would be good to know the conditions.
-
since we’re sharing…
i5 SSD client
zstd -11 compression -
@Junkhacker show off…
< really means George is envious of your tech >
-
@george1421 Standard FOG partclone gzip here. I may have to play with zstd. As for compression index, the image that I tested above is 153.6GB (confirmed in Windows as well) and on the FOG server is using 94GB, so I believe that would be 94/153.6 = 61.2% compression index (if that is the correct method for calculating compression index). Or perhaps it would be 100-61.2 = 38.8% compression index? At any rate, I’m surprised it compressed as well as it did because most of this user’s hard disk is comprised of compressed video (radio/television teacher).
With a newer client PC (M.2 drive) I attained 13GB/min, still short of Junkhacker’s 15.34GB/min. Wow. I will have to try it again after hours and see how much network traffic affected my earlier tests.
-
@Obi-Jon said in Imaging computers at 1.5Gbps:
Junkhacker’s 15.34GB/min
I’m pretty sure Junkhacker is now using zstd for capture and deployment. So that may explain the speed differences.
FWIW: The compression ratio I was talking about was the slider position on the image definition from the FOG management gui.
-
@george1421 Ah, I didn’t even notice that when I set up the test image. It is set to 6 out of 22 (the default I presume since I didn’t change it).
Since I didn’t bring any of my old images over from the old FOG server I may try zstd to see how it compares.
-
@george1421 by using zstd compression 11 i have managed to make my images ~26% smaller and deploy ~36% faster. of course, that’s with a normal image. one full of video may not do so well. mine starts at about 18-20GB/min, but that settles down to the speed in the picture after it gets about 5% done.
-
@Junkhacker Are you using SSDs on your server (and client I would assume)? Wow, smaller images and still faster, that’s pretty sweet.
-
@Obi-Jon it’s important to note that gzip (the default) only has legitimate values of 0-9. zstd has legitimate values of 0-19 (normal) 20-22 (ultra). i don’t have a computer that can use ultra settings without running out of ram and crashing, and i don’t think the extra gains from those settings would be worth it if i did.
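For anyone who wants to see those level ranges firsthand outside of FOG, here's a rough local comparison using the `gzip` and `zstd` CLIs directly (assumes both are installed; `sample.txt` and the output filenames are just placeholders, and `seq` output is simply a compressible sample):

```shell
seq 1 200000 > sample.txt                    # generate a compressible sample file
gzip -9 -k sample.txt                        # gzip: levels 1-9 only
zstd -19 sample.txt -o sample19.zst          # zstd: normal levels go up to -19
zstd -22 --ultra sample.txt -o sample22.zst  # levels 20-22 require --ultra and much more RAM
ls -l sample.txt sample.txt.gz sample19.zst sample22.zst
```

The `--ultra` flag is exactly the guard Junkhacker mentions: those levels use enough memory that zstd makes you opt in explicitly.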
-
@Junkhacker no SSD on the server, it’s running in a vm with a raid for storage
-
@Junkhacker Yeah, I was curious if SSD really helps much at all with straight throughput. I figured SSD might win out when pushing multiple images at once, which is what I plan to do. Should be interesting to see how it goes on several dozen simultaneous unicasts.
Good to know about the gzip values vs zstd. I’m testing a plain Win10 image now with zstd and your settings to see how it compares to gzip. Will test with the big image when I have more time.
-
@Obi-Jon you may want to use zstd -19 compression. that takes about 3x longer on upload for me, but it shaves another ~5% off the size of my images. it takes about the same time when deploying to a single machine, but that could make a big difference in the volume of network traffic with multiple unicasts
-
@Junkhacker Testing zstd -11 vs -19 now. Definitely slower uploading 19, but that wouldn’t matter much to me since I can start it and walk away. Does compression take place on the client or server? I know downloads decompress at the client obviously, but assuming compression also takes place on the client there would be little need for a fast CPU on the server. I’m only running an i3 on my server, used the money for memory, drive and nic instead.
-
@Obi-Jon compression or decompression is always handled on the client system.
-
@Obi-Jon All the heavy lifting (computing) is done on the client. The server only moves the files between the network and storage and handles overall process management.
I’ve run FOG on a Raspberry Pi 2 with pretty good results for a single unicast stream.
-
@george1421 Good to know.
Raspberry Pi2 FOG would have been sweet for carrying into remote 56k-connected offices where I used to work back in the early 2000s.
-
@Obi-Jon We use it as a mobile deployment server and a bit of a novelty. For small offices we typically use an i3 Intel NUC with an onboard SSD. It’s pretty small, easy on power, and has enough horsepower to support multiple deployment streams.
-
OK, test results are in, and wow, zstd is fast, at least on my newest i3 systems with a vanilla Win10 installation (haven’t yet tested it with the original 157GB image I posted earlier). So this isn’t really apples to apples compared with my earlier results, but it is a solid comparison between zstd compression levels.
Vanilla Win10 SSD/6th gen i3/32GB ram system:
Image: 9,180MB uncompressed (converted from 8.55GiB as shown in FOG to MB)
zstd -11: 3,390MB compressed, avg upload 3.66GB/min, download time 27 seconds, peak 18.23GB/min
zstd -19: 3,068MB compressed, avg upload 725MB/min, download time 25 seconds, peak 19.75GB/min
So it looks like a speed improvement of about 7% and a space improvement of about 9%, the tradeoff being a roughly 400% increase in upload time. Worth it for some, not for others. Totally worth it for me to save server space and make deployments as quick as possible to reduce user downtime. I suspect these gains will vary depending on client hardware specs.
HOWEVER, I did see an “error 39 read premature end” at the very end of the download process for zstd -19, right before it rebooted. I didn’t notice whether there was an error when uploading, but the error did cause FOG to repeat the imaging process until I killed it. Windows 10 booted fine, though, and I don’t see any problems. I re-uploaded the image and compared disk usage, and the post-error image is 10MB smaller, so I wouldn’t trust it. This error may have been a fluke; I will probably settle in the zstd -15 range if -19 continues to generate errors.
And for fun…
-
@Obi-Jon the error is most likely completely unrelated to the compression level. if you can capture more information, we’ll see if we can find the cause.
-
@Junkhacker “locate partclone.log” does not find a log file. Thinking a log was not generated, or am I looking for the wrong log file? I tried “locate *.log” but didn’t see anything promising:
/opt/fog/log/fogimagesize.log
/opt/fog/log/fogreplicator.log
/opt/fog/log/fogscheduler.log
/opt/fog/log/fogsnapinhash.log
/opt/fog/log/fogsnapinrep.log
/opt/fog/log/groupmanager.log
/opt/fog/log/multicast.log
/opt/fog/log/pinghost.log
/opt/fog/log/servicemaster.log
/root/fogproject/bin/error_logs/fog_error_1.3.5.log
/root/fogproject/bin/error_logs/foginstall.log
/var/log/alternatives.log
/var/log/auth.log
/var/log/bootstrap.log
/var/log/dpkg.log
/var/log/kern.log
/var/log/php7.1-fpm.log
/var/log/apache2/access.log
/var/log/apache2/error.log
/var/log/apache2/other_vhosts_access.log
/var/log/apt/history.log
/var/log/apt/term.log
/var/log/mysql/error.log
/var/log/unattended-upgrades/unattended-upgrades-shutdown.log
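One thing worth noting about `locate`: it reads a prebuilt index that is refreshed periodically (usually by a cron job), so a log written recently may simply not be indexed yet. Refreshing the index with `sudo updatedb`, or searching the live filesystem with `find`, sidesteps that:

```shell
# find walks the filesystem directly, so nothing can be "missing from the index";
# errors from unreadable directories are discarded, and || true keeps the exit
# status clean even when nothing matches
find / -xdev -name 'partclone*.log' 2>/dev/null || true
```

Also keep in mind that the imaging work happens on the client, so a log from the capture/deploy itself may never land on the server at all.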