FOG Unresponsive under "heavy" load
-
@librarymark OK, for the gateway timeout, let's work with that. If you look in the Apache error log, you should see a PHP timeout waiting for php-fpm to respond. What we need to do is tell Apache to wait a bit longer before timing out.
About how long does it take to push out your image to 20 computers?
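If you want to watch that error log live while a deployment is running, something like this works (the path assumes a Debian/Ubuntu-style Apache layout; on Red Hat-family systems it is usually /var/log/httpd/error_log):
tail -f /var/log/apache2/error.log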
-
@george1421
I just upped the memory in /etc/php/7.1/fpm/pool.d/www.conf:
php_admin_value[memory_limit] = 256M
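For that to take effect I believe php-fpm needs a restart; on this box that would be something like the following (service name guessed from the PHP 7.1 path above):
sudo systemctl restart php7.1-fpm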
-
@librarymark What I want you to test is outlined in this post: https://forums.fogproject.org/topic/11713/503-service-unavailable-error/40
I want you to update this section
<Proxy "fcgi://127.0.0.1:9000"> ProxySet timeout=500 </Proxy>
Set the timeout in seconds to be just a bit longer than your push time.
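Once you have saved that, reload Apache so the new timeout is picked up; on Ubuntu that is typically something like:
sudo systemctl restart apache2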
-
Where do I find the “push time”?
I edited the file /etc/apache2/sites-enabled/001-fog.conf, and it now looks like this:
<VirtualHost *:80>
    <Proxy "fcgi://127.0.0.1:9000">
        ProxySet timeout=300
    </Proxy>
    <FilesMatch "\.php$">
        SetHandler "proxy:fcgi://127.0.0.1:9000/"
    </FilesMatch>
    KeepAlive Off
    ServerName 10.5.0.61
    DocumentRoot /var/www/html/
    <Directory /var/www/html/fog/>
        DirectoryIndex index.php index.html index.htm
    </Directory>
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-d
    RewriteRule ^/fog/(.*)$ /fog/api/index.php [QSA,L]
</VirtualHost>
Is that correct? In any case I will not be able to test now because we just opened (public library). It might be a few days.
-
@librarymark Right, that looks good. Make sure you set the timeout to the right number of seconds. As configured right now, Apache will wait 5 minutes (300 seconds) for php-fpm to respond before giving up. If your image push time is more than 5 minutes, you need to increase this number.
[edit] Sorry, I was not clear: "push time" is the time it takes to send the image to all 20 computers when using a multicast image.
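By the way, as a sanity check before restarting Apache you can ask it to validate the edited file first, for example:
sudo apachectl configtest
It should come back with "Syntax OK" if nothing is broken.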
-
@george1421
My multicast sessions usually take about 5-7 minutes to complete. Is that what you mean?
-
@librarymark Yes, that is what I mean. So change that 300 to 500. Understand this is only a test to see if that IS what is causing your multicast failure. If this proves out, we will need to get back with the developers to understand why.
-
@george1421
How many versions back do we need to go to bypass this? I think I want to step back in time until the bugs are worked out. Or, what older version of Ubuntu would work better?
-
@librarymark Well, if it's a php-fpm issue, you can just turn it off and bypass the problem.
The downside is that if we don't have different use cases helping us understand what is going wrong, it will never get fixed. But if you want to disable php-fpm, I can tell you how; it's just a matter of adding one line to the fog.conf file for Apache.
-
@george1421
I am curious what php-fpm gains us. I have never had performance issues before.
-
@librarymark Performance of the web UI as well as client check-ins. php-fpm is a dedicated PHP engine; before, the FOG developers were using the PHP engine built into Apache. Moving it external allows Apache to do more things (like service web page requests) while a dedicated PHP engine does what it does best.
The downside is that the configuration rules are stricter for the optimization to work.
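The hand-off is visible in the config you posted above: every .php request is forwarded to the php-fpm daemon listening on 127.0.0.1:9000 instead of being run inside Apache itself:
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000/"
</FilesMatch>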
-
I've been having the exact same issues ever since updating to 1.5.4.
Server: Ubuntu 14.04 LTS.
I had 1.5.0 RC10 and it was running smoothly. I only updated because I was having issues with 1709 not restarting after getting a host name; I needed the .10.16 client, and the latest 1.5.3 had it.
Before, with 10 slots open, I could image all 10 computers at an average of 1.5 GB/min. Now it's barely holding 200 MB/min on each client, and the entire web GUI is unresponsive. I also have some computers queued up waiting for a slot to open, and they time out midway.
I have a storage node and updated both the main server and the node to 1.5.4, and ever since, it's been busy replicating over and over.
Also, the PXE boot bzImage part takes much longer. I'm going to go back to 1.5.0 RC10.
How would I go about downgrading?
-
@nrg Well, if you are willing to help us understand what is going on here, I'd like to keep you on 1.5.4.
I want you to do the 2 things listed in this thread.
#1 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/5
We need to add in the memory limit of 256M
#2 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/16
Add the timeout value to your Apache configuration; set it to 500.
Then just reboot your FOG server after the changes.
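To save you digging through those posts, the two edits look roughly like this. In /etc/php/7.1/fpm/pool.d/www.conf (the PHP version in the path may differ on your system):
php_admin_value[memory_limit] = 256M
And inside the <Proxy "fcgi://127.0.0.1:9000"> block of the FOG Apache config:
ProxySet timeout=500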
-
@george1421 Will try it, but I think it might be an issue with my node. Going to rebuild my node (or disable it) to stop it from replicating over and over. Before the 1.5.4 update, my node was stable at about 70% hard drive use. Now I can see it constantly deleting images because it says they don't match; it goes from 20%-40% hard drive usage, so it's cycling through images that it thinks don't match.
Will report back.
-
@nrg Is your storage node a full fog server at the same version as the master node?
-
@george1421 Yes, it's identical hardware/software to my master FOG server. I disabled the node server and there is still an unresponsive issue with the main server. The repeated replicating was giving the main server a load average of 10; after shutting it off and disabling it, the load of the main server was back to normal. Then I tried imaging up to 10 computers, and by the 7th computer the web GUI was unresponsive. The waiting-in-line slot message on the client computers stopped responding. I believe there's a huge issue with this release.
Will try the PHP tweaks; if that doesn't work, I will figure out a way to go back to 1.5.0 RC10 and redo the node fresh.
Edit:
I edited the memory limit to 256M but could not find "/etc/httpd/conf.d/fog.conf" anywhere to edit.
-
@nrg Depending on the distro you are using (that point would also be helpful to know) it may be in this file /etc/apache2/sites-enabled/001-fog.conf
If in doubt you can always search for the file.
cd /etc
grep -R -e ":9000" *
That should tell you the name of the file. If you are using Debian 9.x, we found an issue where php-fpm may not be enabled, which would cause very poor performance with the web GUI too.
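If that turns out to be your situation, the thing worth checking first (this is the generic Debian/Ubuntu fix, not anything FOG-specific) is whether Apache's FastCGI proxy module is enabled:
sudo a2enmod proxy proxy_fcgi
sudo service apache2 restart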
FWIW: If you roll it back to anything I would go to 1.5.0 stable and not RC10.
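If you do decide to roll back, the rough procedure (assuming the server was installed from a git checkout of the fogproject repository; adjust the path and the tag to your setup) is:
cd /path/to/fogproject
git fetch --tags
git checkout 1.5.0
cd bin
sudo ./installfog.sh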
-
I went back to the 1.5.0 final release. So far it has fixed my issue: imaging is back to normal. The node is now copying things over, and I won't know until tomorrow if it fixed the replication cycle. Imaging 10 clients right now and the web GUI is smooth. iPXE boot is back to normal too. The command line screen during imaging doesn't sit at deleting MBR/GPT anymore. New feature??
Current load average: 18.16, 10.60, 4.86.
With 1.5.4, loads were up around 19 across all three numbers, and it constantly sat around 9 while nothing was imaging.
Will wait for 1.8.0 next year =D. Anyway, thanks!
-
@nrg This really sounds like a mix-up of FOG versions. This happens from time to time just due to how the approach to the install has changed across Ubuntu versions. Now I basically create a link from /var/www/fog/ to /var/www/html/fog, but on occasion the two don't work out as expected. PHP-FPM should actually make your load averages much lower than 19/18. I doubt the problem you were seeing was directly due to php-fpm, rather a mixture of connections happening at the same time. If at all possible, maybe I can remote in with you sometime this week and we can look at this together. Thursday I have stuff happening from 3:30 pm -> around 5:00 pm EDT, but I'm pretty much free any other time.
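For reference, if you want to check how that link ended up on your install, something like this will show which of the two paths is the symlink:
ls -ld /var/www/fog /var/www/html/fog
-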
Would it be safe to simply add something like (numbers for illustrative purposes only)
TimeOut 7200
ProxyTimeout 7200
to the Apache config? Given that PHP-FPM handles its own processes anyway, it seems like the default Apache timeouts are far too restrictive and frankly pointless, at least towards PHP-FPM itself.
Unfortunately I don't really have a good way of testing it at the moment; there's not much to image right now, and my environment is small-scale anyway, so it's not a great testing ground for scalability.