Low transfer rate to capture and deploy
-
We had a FOG server in VMware that delivered 9 GB/m, but we had to migrate to VirtualBox and now the transfer rate is no more than 3 GB/m. Even connecting the host directly to the server does not increase the transfer rate.
System:
FOG 1.5.4
VirtualBox 5.2.20
Debian GNU/Linux 9 (stretch)
CPU Type: GenuineIntel
CPU Count: 4
CPU Model: Intel Xeon CPU E3-1220 v3 @ 3.10GHz
Total memory: 6.07 GiB
NICs: Intel Pro/1000 T, HP 1Gb Ethernet
I already changed from GZIP to ZSTD and it had no effect.
Edit: I already tested with iPerf and it gave 1 Gbps.
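For reference, the throughput check was roughly along these lines (iperf3 syntax shown; the IP is just a placeholder for the FOG server's address):

```
# On the FOG server:
iperf3 -s

# On the target machine (replace the IP with the FOG server's address):
iperf3 -c 192.168.1.10 -t 30
```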
-
Well, let me see if I understand this. Your FOG server was running in VMware (ESXi?) and you were getting 9GB/m. That is a reasonable result for FOG running under vSphere, where the VMware server sits on a 10GbE network and a recent (current) target computer is connected via a 1GbE link. In my environment I get about a 12GB/m transfer rate.
FWIW, on a pure 1GbE FOG infrastructure I would expect to see about 6.1GB/m transfer rates on a well-designed network.
Now you have moved your FOG server to VirtualBox and you are seeing 3GB/m transfer rates? Do I understand that correctly?
If so, I can understand why. ESXi is a type 1 hypervisor (i.e. the hypervisor runs directly on the hardware) while VirtualBox is a type 2 hypervisor (i.e. the hypervisor runs on top of a host operating system). FOG has to compete with the host OS running the hypervisor for system resources.
Also, that CPU is a 4-core, 4-thread processor. If you are only running one VM on that server you are probably OK granting that guest 4 vCPUs, but you could end up with resource contention if the VirtualBox host is loaded. The FOG server itself doesn't really work hard or require much CPU. During a deployment the FOG server manages the process, reads the image files from its local hard drive and sends them out the network adapter. The target computer does all of the heavy lifting of image deployment.
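If you do decide to dial back the guest's resources to avoid contention, here is a minimal sketch with VBoxManage (the VM name and values below are just examples, and the VM must be powered off first):

```
# Give the FOG guest 2 vCPUs and 4 GB of RAM (example values)
VBoxManage modifyvm "fog-server" --cpus 2 --memory 4096

# Bring it back up without a GUI console
VBoxManage startvm "fog-server" --type headless
```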
Zstd will give you a better deployment time than gzip at the cost of increased capture time, but given that you typically capture once per image and deploy many times, that trade-off is usually worth it. An ideal compression level for zstd is 11; you can go higher or lower. A compression level of 0 "should" give you faster write speeds on the target computer at the penalty of image size on the server and network load. Remember, if you are changing compression formats or compression levels, that must be done before you capture the image. You can change these settings post capture, but they will not have any impact on an image that has already been captured or on deploying the current image.
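If you want to see the speed/size trade-off for yourself before re-capturing, the zstd command line has a built-in benchmark mode you can point at any large sample file (the file paths here are just examples):

```
# Benchmark zstd levels 1 through 19 against a sample file
zstd -b1 -e19 /path/to/sample.img

# Compress at level 11 to see the resulting file size
zstd -11 -o /tmp/sample.img.zst /path/to/sample.img
```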
The number you see in partclone as it's running (3GB/m) is actually a composite speed. It's the combination of disk reads on the server, the NFS protocol, the TCP/IP data stream, gzip or zstd decompression on the target computer, and the write speed to the target computer's hard drive. It's an end-to-end transfer speed, not specifically a measure of network performance (though network performance is part of the total).
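One way to narrow down which leg of that chain is slow is to test the pieces separately. A rough sketch, assuming the images live under /images on the server and using made-up device, IP and file names you would need to adjust:

```
# On the FOG server: raw read speed of the disk holding /images
sudo hdparm -t /dev/sda

# From a Linux client: pull one captured image file over NFS and discard it.
# This exercises server disk + NFS + network, but skips decompression and target disk writes.
sudo mount -t nfs 192.168.1.10:/images /mnt
dd if=/mnt/MyImage/d1p2.img of=/dev/null bs=1M status=progress
```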
-
Thanks for the information! My network is GLAN certified end to end, but I noticed that I get 6GB/m on the sda4 partition, while I get a miserable 3GB/m on the sda2 partition. Any more ideas to help me? Picture below.
[image: transfer rate screenshot]
-
@Titione Possibly doing some defragmentation might speed things up on sda2?!
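If it helps, here is a quick way to check what sda2 actually is and how fragmented it is, run from a Linux live environment (the e4defrag check only applies if sda2 turns out to be ext4; an NTFS Windows partition would need to be analyzed from within Windows instead):

```
# See what filesystem sda2 actually is
lsblk -f /dev/sda

# If sda2 is ext4, report its fragmentation score (read-only check)
sudo e4defrag -c /dev/sda2
```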
-
@Sebastian-Roth I already did a defrag but without success. Thx
-
@Titione Well then I would go and check for bad sectors on that disk.
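A couple of starting points for that check, assuming smartmontools is installed (the badblocks read-only scan is non-destructive but can take hours on a large disk):

```
# SMART health summary and attributes (watch Reallocated_Sector_Ct and Current_Pending_Sector)
sudo smartctl -H -A /dev/sda

# Read-only surface scan for bad sectors
sudo badblocks -sv /dev/sda
```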