Imaging computers at 2.6Gbps
-
@Obi-Jon it’s important to note that gzip (the default) only has legitimate values of 0-9. zstd has legitimate values of 0-19 (normal) and 20-22 (ultra). I don’t have a computer that can use the ultra settings without running out of RAM and crashing, and I don’t think the extra gains from those settings would be worth it if I did.
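If you want to see the same split outside of FOG, plain zstd on the command line behaves the same way (disk.img is just a placeholder file here):
zstd -19 disk.img -o disk.img.zst
zstd --ultra -22 disk.img -o disk.img.zst
The second form needs the --ultra flag to unlock levels 20-22, and those levels use noticeably more RAM on both the compress and decompress side.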
-
@Junkhacker No SSD on the server; it’s running in a VM with a RAID array for storage.
-
@Junkhacker Yeah, I was curious if SSD really helps much at all with straight throughput. I figured SSD might win out when pushing multiple images at once, which is what I plan to do. Should be interesting to see how it goes on several dozen simultaneous unicasts.
Good to know about the gzip values vs zstd. I’m testing a plain Win10 image now with zstd and your settings to see how it compares to gzip. Will test with the big image when I have more time.
-
@Obi-Jon you may want to use zstd -19 compression. That takes about 3x longer on upload for me, but it shaves another ~5% off the size of my images. It takes about the same time when deploying to a single machine, but it could make a big difference in the volume of network traffic with multiple unicasts.
-
@Junkhacker Testing zstd -11 vs -19 now. Definitely slower uploading at -19, but that wouldn’t matter much to me since I can start it and walk away. Does compression take place on the client or the server? I know downloads decompress at the client, obviously, but if compression also takes place on the client there would be little need for a fast CPU on the server. I’m only running an i3 in my server; I put the money toward memory, drives and the NIC instead.
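Side note for anyone else testing: zstd’s built-in benchmark mode looks like a quick way to preview the speed/ratio tradeoff on a client CPU before committing to a full upload, e.g. (sample.img being any representative chunk of data):
zstd -b11 -e19 sample.img
That benchmarks every level from 11 through 19 on the file and prints compression speed, decompression speed, and ratio for each.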
-
@Obi-Jon Compression and decompression are always handled on the client system.
-
@Obi-Jon All heavy lifting (computing) is done on the client. The server is only used to move the files between the network and storage, and for overall process management.
I’ve run FOG on a Raspberry Pi 2 with pretty good results with a single unicast stream.
-
@george1421 Good to know.
Raspberry Pi2 FOG would have been sweet for carrying into remote 56k-connected offices where I used to work back in the early 2000s.
-
@Obi-Jon We use it as a mobile deployment server and a bit of a novelty. For small offices we typically use an i3 Intel NUC with an onboard SSD. It’s pretty small, easy on power, and has enough horsepower to support multiple deployment streams.
-
OK, test results are in, and wow, zstd is fast, at least on my newest i3 systems with a vanilla Win10 installation (haven’t yet tested it with the original 157GB image I posted earlier). So this isn’t really apples to apples compared with my earlier results, but it’s a solid comparison between zstd compression levels.
Vanilla Win10 SSD/6th gen i3/32GB ram system:
Image: 9,180MB uncompressed (converted from 8.55GiB as shown in FOG to MB)
zstd -11: 3,390MB compressed, avg upload 3.66GB/min, download time 27 seconds, peak 18.23GB/min
zstd -19: 3,068MB compressed, avg upload 725MB/min, download time 25 seconds, peak 19.75GB/min
So it looks like a speed improvement of about 7% and a space improvement of about 9%, with the tradeoff being a roughly 400% increase in upload time. Worth it for some, not for others. Totally worth it for me to save server space and make deployments as quick as possible to reduce user downtime. I suspect these gains will vary depending on client hardware specs.
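Rough math behind those percentages, for anyone checking (bc one-liners using the numbers above):
echo "scale=2; (3390 - 3068) * 100 / 3390" | bc   # ~9.5% smaller image
echo "scale=2; (27 - 25) * 100 / 27" | bc         # ~7.4% faster download
echo "scale=2; 3660 / 725" | bc                   # ~5x the upload time, i.e. ~400% longer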
HOWEVER, I did see an “error 39 read premature end” at the very end of the download process for zstd -19, right before it rebooted. I didn’t notice whether there was an error when uploading, but the error did cause FOG to repeat the imaging process until I killed it. Windows 10 booted fine, though, and I don’t see any problems. I re-uploaded the image and compared disk usage; the post-error image is 10MB smaller, so I wouldn’t trust it. This error may have been a fluke; I’ll probably settle in the zstd -15 range if -19 continues to generate errors.
And for fun…
-
@Obi-Jon The error is most likely completely unrelated to the compression level. If you can capture more information, we’ll see if we can find the cause.
-
@Junkhacker “locate partclone.log” does not find a log file. Thinking a log was not generated, or am I looking for the wrong log file? I tried “locate *.log” but didn’t see anything promising:
/opt/fog/log/fogimagesize.log
/opt/fog/log/fogreplicator.log
/opt/fog/log/fogscheduler.log
/opt/fog/log/fogsnapinhash.log
/opt/fog/log/fogsnapinrep.log
/opt/fog/log/groupmanager.log
/opt/fog/log/multicast.log
/opt/fog/log/pinghost.log
/opt/fog/log/servicemaster.log
/root/fogproject/bin/error_logs/fog_error_1.3.5.log
/root/fogproject/bin/error_logs/foginstall.log
/var/log/alternatives.log
/var/log/auth.log
/var/log/bootstrap.log
/var/log/dpkg.log
/var/log/kern.log
/var/log/php7.1-fpm.log
/var/log/apache2/access.log
/var/log/apache2/error.log
/var/log/apache2/other_vhosts_access.log
/var/log/apt/history.log
/var/log/apt/term.log
/var/log/mysql/error.log
/var/log/unattended-upgrades/unattended-upgrades-shutdown.log
-
@Obi-Jon Partclone.log is related to the client.
Anything presented on the client in regard to FOS (the inits) stays on the client side.
You could add isdebug=yes to that host’s Kernel Arguments field. This will drop the host into a terminal prompt where we can step through and obtain more information.
-
@Tom-Elliott Ah, I borked the log, then, when I re-uploaded the image and re-downloaded it after the error to compare image file sizes. No errors the second time, and the images matched. If I see that error again I’ll revisit this. Thanks!
-
@Obi-Jon I feel I must add, once again.
“Holy Shit”
-
@Tom-Elliott Lol, my sentiments as well, been pumped all day. Can’t wait to try this over multiple unicasts simultaneously.
-
@Obi-Jon That’s great to see. We see very similar speeds with ours, though it’s just a VM on ZFS-based storage with 4GB of RAM and 2 cores. I think the key is making sure you’ve got solid clients, SSDs on the client side, and Intel NICs.
Some of our Broadcom-equipped ThinkPads max out at around 7GB/min, while the Intel-equipped Dells hit the 10-11GB/min mark.
This is all on gzip; I haven’t made a new image since the new compression option arrived.
-
@Bob-Henderson Good point on the Intel NICs, that’s what I’m using as well. My new clients are based on the H110T chipset, which has both Intel and Realtek NICs. I made sure to enable PXE only on the Intel NIC and disabled the Realtek NIC entirely. Now to keep my users from plugging into the wrong jack and generating more help desk tickets. I’m actually contemplating gluing a punched-down RJ45 plug into the Realtek ports, lol.
-
@Obi-Jon Eww, yeah, no, don’t do that.
Color-code the jacks and cables: blue jack, blue cable; orange jack, orange cable. It’s what we do for our users. Another school here, but we’re a 1:1 shop, so all my stuff is laptops. Snapins are my saving grace, tied with PDQ Deploy.
-
@Bob-Henderson That’s a really good idea. Gives me ideas for audio/video stuff that gets plugged in wrong every time they wax the floors…