FOG Upload Imaging Process Extremely Slow
The FOG server is uploading very slowly, about 3.0 GB/min (50 MB/s), and it's running on a 1 Gb link. A 1 Gb link should give 125 MB/s in theory (realistically closer to 110 MB/s), yet I'm seeing less than half of even the disk's write speed.
Here is a disk write speed test:
sudo dd if=/dev/zero of=/tmp/test1.img bs=1G oflag=dsync
41+0 records in
40+0 records out
44004438016 bytes (44 GB) copied, 132.321 s, 333 MB/s
So the disk is capable of 333 MB/s writes.
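Side note: the dd command above has no count=, so it runs until interrupted (which would explain the 41+0 records in vs. 40+0 records out). A bounded, repeatable version (block sizes here are illustrative, not the original values):

```shell
# Bounded write test: 16 x 64 MiB blocks. oflag=dsync forces a synchronous
# write of each block, so the reported rate reflects the disk rather than
# the page cache.
dd if=/dev/zero of=/tmp/test1.img bs=64M count=16 oflag=dsync
rm -f /tmp/test1.img
```

To rule out the network leg separately, an iperf3 run between the target machine and the FOG server would show the raw link throughput without any disk involvement.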
Below is the NIC output showing no errors (IPs and MAC addresses removed for security):
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 0.0.0.0 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80: prefixlen 64 scopeid 0x20<link>
ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 669392866 bytes 718072190517 (668.7 GiB)
RX errors 0 dropped 3357 overruns 0 frame 0
TX packets 157057852 bytes 517660242027 (482.1 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Thoughts on why I’m only seeing 50MB/s write speeds?
@george1421 Thank you for the response; this would explain a lot of the slow upload times.
Currently I'm backing up a Lenovo T470 with a standard SSD, which is using about 350 GB of its 500 GB capacity. The capture takes about 2 hours. That said, we are using pretty much the default FOG server setup, so the image compression format is the default Partclone Gzip.
While this was for a different issue, here is some benchmarking I did just a few days ago: https://forums.fogproject.org/topic/13396/unable-to-capture-windows-10-image/20
@Technolust Your upload times are about what I would expect. Keep in mind that the target computer does 90% of the work of an image capture; the FOG server itself only moves data from the network adapter to the storage disk and manages the overall process. The target computer reads the image from its local disk, compresses it, and then forwards the data stream over the network to the FOG server. That compression takes quite a bit of CPU. If you use zstd for compression, capturing the image is slower than with gzip, but you get the benefit of zstd on the deploy side: a faster image deployment. In short, zstd is slower on capture because it compresses harder to optimize for faster deployment.
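To make the capture-side pipeline concrete, here is a rough sketch (not FOG's actual commands; /dev/zero stands in for the disk being read and a local file stands in for the network socket):

```shell
# Capture-side pipeline sketch: read raw disk data, compress it on the
# client CPU, and emit the compressed stream (which is what crosses the
# wire to the FOG server). The client's compression speed caps throughput.
dd if=/dev/zero bs=1M count=64 2>/dev/null | gzip -c > /tmp/capture.img.gz
ls -l /tmp/capture.img.gz
rm -f /tmp/capture.img.gz
```

Swapping `gzip -c` for a higher-effort compressor in that middle stage slows the whole pipeline down, which is why the compression choice shows up directly in capture rates.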
What target computer are you trying to capture, and what are its processor and storage media? I'd expect a modern dual-core system with an SSD or NVMe disk to reach roughly 3 GB/min capture rates.
A 3 GB/min capture rate is realistic and nothing to be concerned about. The same goes for image deployment: on a clean, well-maintained 1 GbE network you should expect deployment rates of about 6.1 GB/min.
As long as we are talking about networking, remember that you can saturate a 1 GbE link by sending just 3 simultaneous unicast images to target computers. A 4th unicast stream will create a large penalty in network performance.
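As a back-of-envelope check (assuming roughly 117 MiB/s of usable 1 GbE payload, my figure, not from this thread), the per-stream share drops quickly as unicast deploys are added:

```shell
# Per-stream share of an assumed ~117 MiB/s usable 1 GbE payload.
for n in 1 2 3 4; do
  awk -v n="$n" 'BEGIN { printf "%d streams: ~%.0f MiB/s each\n", n, 117 / n }'
done
```

By the 4th stream each deploy gets under 30 MiB/s of wire bandwidth, which is where the performance penalty becomes obvious.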