Hardware upgrades for server
-
@tesparza Those numbers are not bad. They are a little lower than expected, but more than adequate. With those numbers your disk subsystem can produce a theoretical max of about 13 GB/min, so your bottleneck is not the disk subsystem.
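As a sanity check on that figure (the throughput number here is an assumed round figure for illustration, not your exact benchmark result): a disk subsystem sustaining roughly 220 MB/s works out to 220 MB/s × 60 s ≈ 13.2 GB/min.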
The next step is to check the network stack. Ideally we’d like to get a computer connected to the same physical network switch as your FOG server.
-
@george1421 Okay, I'm gonna try to capture again; this time I'm directly connected to the same switch as the fog server. 1 Gbps link
-
@tesparza Understand that capture rates will be different than deployment rates. On capture you pay a compression penalty in exchange for faster deployments.
I haven’t had a chance to get back to this thread, but the next step is to test the network stack. Here is the concept: we will use a target computer connected to the same switch as your fog server. We will register the target computer and then schedule a debug capture or deploy (it doesn’t matter which). Once we are at the command prompt we will manually mount the /images/dev share from the fog server and then use the iperf3 utility to measure bandwidth between the target computer and FOG. That will give us an idea of bandwidth availability. Then you will repeat the same process from the far end of your network to see if there is a difference.
The last bit is testing NFS performance, but let’s see what the network tests tell us. I need to create clear instructions for the above to ensure we get the numbers we might expect.
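Until I write those up properly, here is a rough sketch of what I have in mind. The server address 192.168.1.10 below is just a placeholder for your FOG server's IP:

```
# On the FOG server, start an iperf3 listener (assumes iperf3 is installed there):
iperf3 -s

# On the target computer, at the FOS debug command prompt:
# 1. Measure raw TCP bandwidth between the target and the FOG server.
iperf3 -c 192.168.1.10

# 2. Mount the FOG NFS share and run a rough write-speed test.
mkdir -p /images
mount -o nolock 192.168.1.10:/images/dev /images
dd if=/dev/zero of=/images/testfile bs=1M count=1024 conv=fdatasync

# Clean up afterwards.
rm /images/testfile
umount /images
```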
-
@george1421
My core switches are 1 Gbps and my classroom switches are 100 Mbps. That’s the bottleneck. Can’t do anything about it until I get my upgrades later this year.
-
@tesparza OK, knowing that, I might expect to see ~700 MB/min transfer rates using FOG.
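That estimate is just the raw link math: 100 Mb/s ÷ 8 = 12.5 MB/s, and 12.5 MB/s × 60 s = 750 MB/min, so ~700 MB/min once you allow for protocol overhead.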
So what can you do?
- Upgrade your network <smirk>
- Change your image compression from standard gzip to zstd and set the compression level to 11-15. zstd is a newer, tighter, and faster-decompressing tool than gzip. The tighter you can pack the image, the easier it will be on bandwidth. But it’s also a sliding scale: the tighter you pack the data, the more CPU the client uses during image decompression (see the sketch after the next paragraph).
We don’t have any baseline numbers using 100 Mb/s networking, so you will have to find the right fit for your setup. In your case a faster server won’t help. If you have multiple computers to image at the same time, I would try multicasting the image to several at a time. Because you don’t have speed, you have to manage the quantity.
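If you want a feel for the level tradeoff before changing the FOG setting, a quick test on any large file works. This is just a sketch; the filenames are examples:

```
# Compress a sample image at two zstd levels and compare size vs. time.
time zstd -11 -k sample.img -o sample-11.zst
time zstd -15 -k sample.img -o sample-15.zst
ls -lh sample-11.zst sample-15.zst

# Decompression is what the client does during deployment; its speed is
# largely independent of the compression level used.
time zstd -d -c sample-11.zst > /dev/null
time zstd -d -c sample-15.zst > /dev/null
```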
-
@george1421 Yeah, I just tried a deployment on the switch where the fog server is connected and I’m getting 8 GB/min.
I’m just gonna have to wait for the upgrade that we are getting; for now I’m gonna just register the host and use the snapins. Multicast and cloning will have to wait until later this year. Thank you so much guys
-
@tesparza On the plus side, you know that your server is fine and up to the job. No expense needed there.
-
I use a VM as my central fog server.
The storage ‘nodes’ are NAS boxes (usually FreeNAS), so if you have a Synology, FreeNAS, or Qnap box, you could set those up as the place the data actually moves from.
My FOG server is on the other side of an IPsec VPN, so I cannot pull images directly from it.
-
@george1421 zstd decompression speed is supposed to stay more or less the same whatever the compression level. Only compression becomes slower.
-
@compman While that’s true, in practice zstd at a higher compression level will give a faster overall process when the bottleneck is the network.
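To put hypothetical numbers on it (these are made up purely for illustration): if level 11 packs an image down to 5.0 GB and level 15 to 4.6 GB, then at 100 Mb/s (12.5 MB/s) the transfers take roughly 400 s versus 368 s. With decompression speed roughly flat across levels, the tighter image finishes about half a minute sooner per client.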