@entr0py You didn’t mention whether the FOG server is virtualized or physical.
In doing some benchmark testing back in the day, I was able to saturate a 1 GbE link with three simultaneous unicast imaging sessions. Since you’re talking about 10 and 40 GbE this may not be directly relevant, but three concurrent unicasts is where things started falling down in my environment. It’s not the solution, just one data point.
During imaging the FOG server doesn’t require much CPU. Its only function is to monitor the imaging process and move files from the storage subsystem to the network adapter; all of the heavy lifting during imaging happens at the target computer. Heck, I can run FOG on a Raspberry Pi 3 and image at almost normal speed (one unicast image only). So I’m just saying, a FOG server with gobs of RAM and 128 processors won’t really speed up the imaging process. It will help with multiple concurrent unicast sessions, but it won’t make a single deployment any faster.
So there are two areas I would look into:
- Disk subsystem
- Network performance
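For the disk subsystem, a quick sanity check is a sequential write/read pass with dd. This is just a sketch: /tmp/fog_disktest is a scratch path I picked for illustration; for a meaningful result, point it at your /images partition and use a file larger than the server’s RAM so the page cache doesn’t flatter the numbers.

```shell
# Sequential write test: conv=fdatasync forces the data to disk before dd
# exits, so the MB/s figure dd prints reflects the disk, not the page cache.
# (256 MB here for a quick run; use several GB on a real server.)
dd if=/dev/zero of=/tmp/fog_disktest bs=1M count=256 conv=fdatasync

# Sequential read test: stream the file back out and note the MB/s figure.
dd if=/tmp/fog_disktest of=/dev/null bs=1M

# Clean up the scratch file.
rm -f /tmp/fog_disktest
```

As a rough yardstick, a single 1 GbE link tops out around 110-120 MB/s, so the disk subsystem needs to comfortably exceed that times the number of simultaneous unicasts you expect.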
I have this article from a long time ago that will give you commands to test your FOG server. https://forums.fogproject.org/topic/10459/can-you-make-fog-imaging-go-fast?_=1691688342623
If you put one of the target computers into debug mode, you can start up iperf3 (built into FOS Linux) as a server with iperf3 -s, and then run the client side of the test from the FOG server. Let’s see how fast your FOG server can go.
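A minimal sketch of that client side, wrapped in a helper so the target’s IP is easy to swap. run_iperf_client is just a name I made up for this example, and the IP in the usage comment is a placeholder — use the address shown on the target’s console in debug mode.

```shell
# Helper: run the iperf3 client against a target that is already running
# "iperf3 -s" (e.g. a FOG target booted into debug mode).
run_iperf_client() {
    # $1 = IP address of the target computer
    # -t 10 runs the test for 10 seconds; add -R to measure the reverse
    # direction (target -> server), which matters for image capture.
    iperf3 -c "$1" -t 10
}

# Usage (placeholder address):
#   run_iperf_client 192.168.1.100
```

On a healthy 1 GbE link iperf3 should report somewhere around 940 Mbits/sec; numbers well below the link rate point at the NIC, driver, or switch path rather than the disk.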