FOG Unresponsive under "heavy" load
-
@librarymark Yes, that is what I mean. So change that 300 to 500. Understand this is only a test to see if that IS what is causing your multicast failure. If this proves out we will need to get back with the developers to understand why.
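As a sketch of what that change looks like, assuming the 300 in question is Apache's default Timeout directive (the later posts in this thread point the same way):

# In the main Apache config -- e.g. /etc/apache2/apache2.conf on Ubuntu
# (the path is an assumption; adjust for your distro). Raise the default 300:
Timeout 500

Restart Apache (or reboot the server) afterwards so the change takes effect.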
-
@george1421
How many versions back do we need to go to bypass this? I think I want to step back in time until the bugs are worked out. Or, what older version of Ubuntu would work better?
-
@librarymark Well if it's a php-fpm issue, you can just turn it off and bypass the issue.
The downside is that if we don't have different use cases helping us understand what is going wrong, it will never get fixed. But if you want to disable php-fpm I can tell you how; it's just adding one line to the fog.conf file for Apache, as sketched below.
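I don't know the exact line george has in mind, so treat this as a sketch: assuming fog.conf hands .php requests to php-fpm with a SetHandler proxy line on port 9000 (consistent with the ":9000" grep suggested later in this thread), bypassing php-fpm means neutralizing that handler so the PHP engine built into Apache serves the pages again:

# Hypothetical fragment of fog.conf (e.g. /etc/apache2/sites-enabled/001-fog.conf;
# the path varies by distro):
<FilesMatch "\.php$">
    # The line routing PHP to php-fpm would look something like this,
    # commented out here to bypass it:
    # SetHandler "proxy:fcgi://127.0.0.1:9000"
    # Apache documents "SetHandler None" as the way to undo an earlier SetHandler:
    SetHandler None
</FilesMatch>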
-
@george1421
I am curious what php-fpm gains us. I have never had performance issues before.
-
@librarymark Performance of the web UI as well as client check-ins. php-fpm is a dedicated PHP engine; before, the FOG developers were using the PHP engine built into Apache. Moving it external allows Apache to do more things (like service web page requests) while a dedicated PHP engine does what it does best.
The downside is that, for the performance gain, the configuration rules are stricter for the optimization to work; a hypothetical pool fragment is sketched below.
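To make that concrete, here is a hypothetical php-fpm pool fragment (the kind of thing found in a pool file such as www.conf). The values are illustrative only, not FOG's shipped settings; they show the process-manager knobs that have to be sized correctly for the dedicated engine to actually pay off:

; Illustrative values only -- not FOG's actual configuration.
; php-fpm pre-forks and recycles its own PHP worker processes, so these
; limits, not Apache's, decide how many PHP requests run at once.
pm = dynamic
; hard cap on concurrent PHP workers
pm.max_children = 50
; workers spawned at service start
pm.start_servers = 5
; keep at least this many idle workers ready
pm.min_spare_servers = 5
; recycle idle workers above this count
pm.max_spare_servers = 10

If those limits are too low, requests queue behind busy workers (a web UI that hangs under load is a classic symptom); too high, and the box starts swapping.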
-
I’ve been having the exact same issues ever since updating to 1.5.4.
Server: Ubuntu 14.04 LTS.
I had 1.5.0 RC10 and it was running smoothly. I only updated because I was having issues with 1709 not restarting after getting a host name. I needed the .10.16 client and the latest 1.5.3 had it.
Before, with 10 slots open, I could image all 10 computers at an average of 1.5 GB/min. Now it’s barely holding at 200 MB/min on each client, and the entire web GUI is unresponsive. I also have some computers queued up waiting for a slot to open, and they would time out midway.
I have a storage node and updated both the main server and the node to 1.5.4, and ever since, it’s been busy replicating over and over.
Also, the PXE boot bzImage part takes much longer. I’m going to go back to 1.5.0 RC10.
How would I go about downgrading?
-
@nrg Well if you are willing to help us understand what is going on here I’d like to help keep you on 1.5.4.
I want you to do 2 things listed in this thread.
#1 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/5
We need to add in the PHP memory limit of 256M (both changes are sketched after this list)
#2 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/16
Add the timeout value to your Apache configuration; set the timeout value to 500
Then just reboot your fog server after the changes.
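A minimal sketch of both changes, assuming a Debian/Ubuntu-style layout (exact paths vary by distro and PHP version):

# 1) In the php.ini used by php-fpm -- the path is an assumption, e.g.
#    /etc/php/7.0/fpm/php.ini on Ubuntu or /etc/php.ini on CentOS:
memory_limit = 256M

# 2) In the Apache configuration, as sketched earlier in the thread:
Timeout 500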
-
@george1421 Will try it, but I think it might be an issue with my node. Going to rebuild my node (or disable it) to stop it from replicating over and over. Before the 1.5.4 update, my node was stable with hard drive usage of 70%. Now I can see it constantly deleting images because it says they’re not matching. It would go from 20–40% hard drive usage, so it’s cycling through images that it thinks don’t match.
Will report back.
-
@nrg Is your storage node a full fog server at the same version as the master node?
-
@george1421 Yes, it’s identical hardware and software to my master FOG server. I disabled the node server and there’s still an unresponsiveness issue with the main server. The repeated replicating was giving the main server a load average of 10. After shutting it off and disabling it, the average load of the main server was normal. Then I tried imaging up to 10 computers, and by the 7th computer the web GUI was unresponsive. The waiting-in-line slot message on the client computers stopped responding. I believe there’s a huge issue with this release.
Will try the PHP tweaks; if that doesn’t work, will figure out a way to go back to 1.5.0 RC10 and redo the node fresh.
Edit:
I edited the memory limit to 256M but could not find “/etc/httpd/conf.d/fog.conf” anywhere to edit.
-
@nrg Depending on the distro you are using (that point would also be helpful to know), it may be in this file: /etc/apache2/sites-enabled/001-fog.conf
If in doubt you can always search for the file.
cd /etc
grep -R -e ":9000" *
That should tell you the name of the file. If you are using Debian 9.x, we found an issue where php-fpm may not be enabled, which would cause very poor performance with the web GUI too.
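If you want to verify php-fpm is actually enabled and listening, something like this should work on a systemd-based distro (the service name is a guess; it varies with the PHP version, e.g. php7.0-fpm on Debian 9):

# Check the service state (service name is an assumption for Debian 9):
systemctl status php7.0-fpm

# Check whether anything is listening on the FastCGI port the FOG config
# points at:
ss -tlnp | grep 9000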
FWIW: If you roll it back to anything, I would go to 1.5.0 stable and not RC10.
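Since FOG installs run from a git checkout (later in this thread @nrg describes updating with git pull plus installfog.sh), one plausible rollback, sketched here rather than an official procedure, is to check out the 1.5.0 tag and rerun the installer:

# Hypothetical sketch -- assumes your install came from a git clone of the
# fogproject repository and that a 1.5.0 tag exists in it:
cd /path/to/fogproject      # wherever your clone lives
git checkout 1.5.0          # move the working tree to the 1.5.0 tag
cd bin
sudo ./installfog.sh        # rerun the installer for that version

Note this only rolls back the code; any database schema changes made by the newer version stay in place, so take a backup first.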
-
I went back to the 1.5.0 final release. So far it has fixed my issue: imaging is back to normal. The node is now copying things over, and I won’t know until tomorrow if it fixed the cycle of replicating. Imaging 10 clients right now and the web GUI is smooth. iPXE boot is back to normal too. The command line screen during imaging doesn’t sit at deleting MBR/GPT anymore. New feature??
Current load average: 18.16, 10.60, 4.86
With 1.5.4, loads were up around 19 across all three, constantly staying around 9 while nothing was imaging.
Will wait next year for 1.8.0 =D. Anyway, thanks!
-
@nrg This really sounds like a mix-up of FOG versions. This happens from time to time just due to how the install approach has changed across Ubuntu versions. Now I basically create a link from /var/www/fog/ to /var/www/html/fog, but on occasion the two don’t work out as expected. PHP-FPM should actually make your load averages much lower than 19/18. I doubt the problem you were seeing was directly due to php-fpm, rather a mixture of connections happening at the same time. If at all possible, maybe I can remote in with you sometime this week and we can look at this together. Thursday I have stuff happening from 3:30 pm to around 5:00 pm EDT, but I’m pretty much free any other time.
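For reference, one way such a link gets created (a sketch only; whether the installer makes it for you, and in which direction, depends on the FOG version and distro):

# Hypothetical: expose the FOG web root under Apache's default docroot by
# symlinking it; verify which path your vhost actually serves first.
sudo ln -s /var/www/fog /var/www/html/fog

-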
Would it be safe to simply add something like (numbers for illustrative purposes only)
Timeout 7200
ProxyTimeout 7200
to the Apache config? Given that PHP-FPM handles its own processes anyway, it seems like the default Apache timeouts are far too restrictive and frankly pointless, at least towards PHP-FPM itself.
Unfortunately I don’t really have a good way of testing it at the moment; there isn’t much to image right now, and my environment is small scale anyway, so it’s not a great testing ground for scalability.
-
I have rolled back to Ubuntu 14.04 and FOG 1.3.5. It works with no issues whatsoever for me.
Sorry I can’t help troubleshoot, but I have very little time to get this running.
-
@tom-elliott Not sure how the FOG version got mixed; I just ran git pull and reran installfog.sh at the command line to update to the latest in dev-branch.
Maybe something in the 1.5.0 RC10 install that I had screwed it up going to 1.5.4??
BTW, I’ve been on FOG since 1.4.4 and have been updating at the command line ever since. Coming in this morning, the replicator is looking normal. The node usage pie chart is normal compared to the main server chart; before, it would show 20–40% as it was constantly deleting and replicating. The replicator logs are saying “No need to sync,” which is good.
Interesting to see others in here experienced the same issue updating to 1.5.4.
I appreciate the offer, but I have to get these computer labs up and running soon. Love the support, and FOG has been a lifesaver.