Fog is Pinging CPU when Imaging
-
Hey All,
I am a FOG noob and wanted some advice on how to troubleshoot FOG pinging my CPU when imaging 15 computers. I am not sure if I should be looking at the FOG software, the Ubuntu OS, or networking issues. Ubuntu is up to date and FOG is on the latest version. One of my colleagues suggested that FOG might be encrypting the files, which would account for the high CPU usage; is this true?
I appreciate any advice you can give, thanks!
-
I’m assuming you have some sort of monitoring software that’s telling you something along the lines of “CPU usage is high” for your FOG server? Is this what you mean by “pinging” the CPU?
First off, FOG doesn’t encrypt images, though it does compress them; the compression and decompression actually happen on the client being imaged, not on the server. When an image is deployed, the NFS share is mounted on the local host (the one receiving the image) and the image is decompressed there as it’s written to disk. That by itself is perfectly fine, but the FOG server still has to maintain the NFS connection and serve all the reads.
The more systems you have imaging at the same time, the more NFS connections the FOG server has to maintain and the more data it has to push out. That can take noticeable CPU, especially if your server is a VM or a minimal system.
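If you want a quick sanity check on how many clients are actually holding NFS connections during a deploy, you can count the established sessions on the NFS port from the FOG server (this assumes the default NFS port of 2049):

    # Count established NFS (TCP port 2049) connections on the FOG server
    ss -tn state established '( sport = :2049 )' | tail -n +2 | wc -l

    # Or, on older systems without ss:
    netstat -tn | grep ':2049 ' | grep -c ESTABLISHED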
-
Thanks for the response! And yes, the FOG server is currently running on a VM. Is there any compression going on? (If so, can I disable it?) It just seems that such high CPU usage isn’t justified simply by serving NFS mounts for imaging. Currently, each machine being imaged has an nfsd process eating about 25% of the CPU when I run top on the FOG server.
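For reference, this is roughly how I’m watching it (top, filtered to just the nfsd kernel threads; pgrep builds the comma-separated PID list that top’s -p flag expects):

    # Show only the nfsd kernel threads and their CPU share
    top -p $(pgrep -d',' nfsd)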
-
I might suggest adding more CPU to the VM. From a raw-power standpoint, VMs don’t make the greatest servers, especially at the scale FOG can operate. Most FOG servers doing imaging on the scale you’re describing are physical, dedicated machines, because of the amount of work involved.
Yes, you could turn off compression, but then you’d be using a lot more disk space.
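For a rough sense of that tradeoff, you can compare the stored (compressed) image sizes against the raw size of the disks they came from. This assumes the default /images store location:

    # Compressed, on-disk size of each stored image
    du -sh /images/*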
-
Wow, thanks for the quick response. I have plenty of space, so I’m not too worried about that, and I’m already dedicating a lot of resources to the VM. So what you’re saying is the server will use whatever it can get its hands on if I give it more CPU resources? I feel like it will just max out again. Currently, imaging 8 machines is spiking my CPU (2 Xeon CPUs at 3.2GHz). What would you suggest as the proper resources to accommodate this?
Btw, how would I turn off compression?
-
Turning off compression can be pretty difficult, as you’d be modifying the scripts inside init.gz to stop them compressing on upload and decompressing on deploy.
I’ve never had to play with this, but there are a lot of places where you’d need to make the modifications. Mind you, this wouldn’t convert already-uploaded images, and it could even make them unusable, since nothing would be left to decompress those files at deploy time.
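For reference, if you do want to poke around in there, the general approach looks something like this. It’s only a sketch, assuming an older-style FOG where init.gz is a gzipped, loopback-mountable filesystem image; newer versions may package it differently, and the path under /tftpboot can vary by install:

    # Work on a copy, not the live file
    cp /tftpboot/fog/images/init.gz /tmp/ && cd /tmp
    gunzip init.gz                    # leaves an uncompressed file named 'init'
    mkdir initroot
    sudo mount -o loop init initroot  # mount the filesystem image
    # ... edit the imaging scripts inside initroot ...
    sudo umount initroot
    gzip init                         # repack, then copy back over the original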
I’d recommend giving the system just a few more resources to peck at. Maybe another CPU (3 total, if I’m understanding this correctly), and maybe limit the number of hosts able to image at the same time. The default usually sits at 10 systems, but you can adjust this under Storage Management.
Then play with the number of systems imaging at once; maybe lower it. Don’t force systems to image, either, as that defeats the purpose of the limits you’ve set.
You can use the VM relatively safely; that’s how I do all of my testing right now.
Just expect the “pinging” of the CPU, as it’s working pretty hard to keep up. It shouldn’t sustain that level of load for very long. I’d even suggest not watching that CPU’s load average too closely, since heavy use during imaging windows is expected. I see the same spikes on our dedicated FOG server (4 Xeon 3.2GHz, 32GB RAM).
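If you do want to keep an eye on it without staring at top, standard tools are enough to sample load over an imaging window:

    # 1-, 5-, and 15-minute load averages at a glance
    uptime

    # Sample CPU usage every 5 seconds; the 'id' column is idle %
    vmstat 5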