Minimum Server Specs
What are the minimum CPU, RAM, and storage specs that FOG can operate on? What are the recommended specs for handling unicast images on up to 100-200 computers with a queue size of 6-12 PCs at a time?
unicast images on up to 100-200 computers
Is there any good reason you don’t want to use multicast? That would take a huge load off the disk subsystem and network of your FOG server!
DJslimD1k last edited by
@george1421 That is some good information! Thank you!
The recommendations are based on your campus size (up to 200) and whether you have the FOG client installed on the target computers.
If you are purely imaging, then a minimum of 2 vCPUs with 4 GB of RAM if you are using a text-only install of Linux on the FOG host server, and 6 GB if you are running X Windows (a GUI interface) on the FOG server.
Ideally you would want a 10 GbE interface if you are planning on imaging 12 systems at a time. Using modern target computers you can saturate a 1 GbE link by imaging 3 simultaneous systems. You can set up a multiple 1 GbE LAG link to spread the load across all links. For 12 simultaneous images you will want a 4-link LAG (802.3ad) group.
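To make that sizing rule concrete, here is a small sketch that turns the rule of thumb above (about 3 simultaneous images saturate one 1 GbE link) into a LAG-size estimate. The "3 clients per link" figure is taken from the post; your real number depends on target hardware and image compression.

```python
import math

def links_needed(simultaneous_clients, clients_per_gbe=3):
    """Estimate how many 1 GbE links a LAG group needs, assuming
    roughly clients_per_gbe simultaneous images saturate one link."""
    return math.ceil(simultaneous_clients / clients_per_gbe)

print(links_needed(12))  # -> 4, i.e. the 4-link 802.3ad group mentioned above
```

Note that 802.3ad balances per-flow, not per-packet, so a single unicast image stream still tops out at one link's speed; the LAG only helps when many clients image at once.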
Your disk subsystem is going to be the most critical factor in your FOG server design. Consider imaging 5 systems at a time using a single spindle HDD. That HDD head is going to be seeking all over the platter to keep up feeding those 5 systems with data. Moving to a multiple spindle disk array will help but if you are really trying to image 12 systems at a time you will need a flash disk array to keep up with the target computers.
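A quick back-of-envelope calculation shows why the disk subsystem matters so much. Assuming each client pulls about one third of a 1 GbE link (per the earlier rule of thumb, so roughly 42 MB/s each), the sustained read rate the array must deliver grows quickly:

```python
# Rough sustained-read requirement for simultaneous unicast imaging.
# Assumes ~1/3 of a 1 GbE link per client (~42 MB/s); real rates vary
# with image compression and target hardware.
PER_CLIENT_MB_S = 1000 / 8 / 3  # ~41.7 MB/s per client

for clients in (5, 12):
    need = clients * PER_CLIENT_MB_S
    print(f"{clients} clients need ~{need:.0f} MB/s sustained reads")
```

Twelve clients work out to roughly 500 MB/s of sustained reads. A single spindle HDD delivers far less than that once its head is seeking between 12 separate read streams, which is why a flash array is suggested at that concurrency.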
Consider that for imaging alone there is not much workload on the FOG host server. All it's doing during imaging is moving data between the disk subsystem and the network subsystem. There is no computational load on the FOG server other than moving data between storage and network and monitoring the overall imaging process. All of the heavy lifting for imaging is being done on the target computers. You don't need a monstrous CPU with 1000 cores; it simply will not help the imaging process directly. All you should care about is efficient disk and network subsystems.
Now enter the FOG service running on the client computers. This service checks in with the FOG server every X minutes to see if there are any tasks that need to be processed. This check-in takes CPU time on the FOG server. So the number of target computers running the FOG client will have a greater impact on FOG host resources than imaging does. There is some fine-tuning you can do to help with performance losses due to the FOG client service. Understand I'm not saying the FOG service is evil, just that you need to be mindful of it and make a few system tweaks accordingly.
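To get a feel for the scale of that check-in load, here's a trivial estimate. The 5-minute interval below is purely illustrative (the actual interval is configurable on the FOG server, not a fixed value):

```python
# Average web-request load from FOG client check-ins.
# clients: number of machines running the FOG client service
# interval_minutes: illustrative check-in interval (configurable in FOG)
def checkins_per_minute(clients, interval_minutes):
    return clients / interval_minutes

print(checkins_per_minute(200, 5))  # -> 40.0 check-ins/minute on average
```

Each check-in is a web request the server's PHP stack must service, so 200 clients generate a steady background load even when nothing is being imaged; lengthening the interval reduces it proportionally.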
DJslimD1k last edited by DJslimD1k
We gave our server 32 GB of RAM, a 6-core Xeon processor, a 10 GbE link, and 1 TB of SSD storage. It transfers at about 15 GB per minute while imaging 15 computers. The server specs are important, but having a good network to transfer the images on is just as important.
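For comparison with the link-speed discussion earlier, that quoted rate converts to more familiar units like this (taking the 15 GB/min as an aggregate figure across all 15 clients):

```python
# Convert the quoted aggregate transfer rate into MB/s and Gb/s.
GB_PER_MIN = 15
mb_per_s = GB_PER_MIN * 1000 / 60   # decimal megabytes per second
gbit_per_s = mb_per_s * 8 / 1000    # gigabits per second on the wire
print(f"{mb_per_s:.0f} MB/s = {gbit_per_s:.1f} Gb/s")  # -> 250 MB/s = 2.0 Gb/s
```

So the setup is pushing about 2 Gb/s sustained, comfortably within a 10 GbE link but well beyond what a single 1 GbE interface could carry.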