FOG Unresponsive under "heavy" load
-
@librarymark
And after I reboot the server and the multicast actually runs, the PCs are stuck at this:
and FOG’s webpage says this:
-
@librarymark Do you get php memory exhaustion in the logs?
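If you’re not sure where to look, something like this (assuming a stock Ubuntu/Apache layout; adjust the log paths and PHP version to your install) should turn up any “Allowed memory size … exhausted” entries:
grep -i "exhausted" /var/log/apache2/error.log /var/log/php7.1-fpm.log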
Would also be interested in seeing your
free -m
and
top
(shift+m) stats when this happens. -
@librarymark said in FOG Unresponsive under "heavy" load:
Edit: Well, I take that back (a little bit). After a 20-PC multicast session, none of the PCs were able to ‘update the database’. I had to cancel the session, reboot the FOG server, and reboot the PCs. At least the image was successfully blasted out; otherwise I would be having a bad day right about now.
It would be interesting to know the memory usage when this broke.
Also, just for clarity, what updates did you do to the www.conf file? Did you up the memory to 256MB?
-
@librarymark OK, for the gateway timeout, let’s work with that. I think if you look in the Apache error log, you will see a PHP timeout waiting for php-fpm to respond. What we need to do is tell Apache to wait a bit longer before timing out.
About how long does it take to push out your image to 20 computers?
-
@george1421
I just upped the memory in /etc/php/7.1/fpm/pool.d/www.conf:
php_admin_value[memory_limit] = 256M
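I’m assuming I also need to restart the service for the change to take effect, something like this (guessing at the service name for PHP 7.1):
sudo systemctl restart php7.1-fpm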
-
@librarymark What I want you to test is outlined in this post: https://forums.fogproject.org/topic/11713/503-service-unavailable-error/40
I want you to update this section
<Proxy "fcgi://127.0.0.1:9000"> ProxySet timeout=500 </Proxy>
Set the timeout in seconds to be just a bit longer than your push time.
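Once it’s edited, a quick syntax check and restart should pick up the change (these are the Debian/Ubuntu-style commands; adjust for your distro):
apachectl configtest
sudo systemctl restart apache2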
-
Where do I find the “push time”?
The file I edited is /etc/apache2/sites-enabled/001-fog.conf, and it now looks like this:
<VirtualHost *:80>
    <Proxy "fcgi://127.0.0.1:9000">
        ProxySet timeout=300
    </Proxy>
    <FilesMatch "\.php$">
        SetHandler "proxy:fcgi://127.0.0.1:9000/"
    </FilesMatch>
    KeepAlive Off
    ServerName 10.5.0.61
    DocumentRoot /var/www/html/
    <Directory /var/www/html/fog/>
        DirectoryIndex index.php index.html index.htm
    </Directory>
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-d
    RewriteRule ^/fog/(.*)$ /fog/api/index.php [QSA,L]
</VirtualHost>
Is that correct? In any case, I will not be able to test right now because we just opened (we’re a public library). It might be a few days.
-
@librarymark Right, that looks good. Make sure you set the timeout to the right number of seconds. Right now, as configured, Apache will wait 5 minutes (300 seconds) for php-fpm to respond before giving up. If your image push time is more than 5 minutes, you need to adjust this number.
[edit] Sorry, I was not clear: “push time” is the time it takes to send the image to all 20 computers when using a multicast image.
-
@george1421
My multicast sessions usually take about 5-7 minutes to complete. Is that what you mean? -
@librarymark Yes, that is what I mean. So change that 300 to 500. Understand this is only a test to see if that IS what is causing your multicast failure. If this proves out, we will need to get back with the developers to understand why.
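So that Proxy section in 001-fog.conf should end up reading:
<Proxy "fcgi://127.0.0.1:9000">
    ProxySet timeout=500
</Proxy>
and then restart Apache so it takes effect.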
-
@george1421
How many versions back do we need to go to bypass this? I think I want to step back in time until the bugs are worked out. Or, would an older version of Ubuntu work better? -
@librarymark Well, if it’s a php-fpm issue, you can just turn it off and bypass the problem.
The downside is that if we don’t have different use cases helping us understand what is going wrong, it will never get fixed. But if you want to disable php-fpm, I can tell you how; it’s just adding one line to the fog.conf file for Apache.
-
@george1421
I am curious what php-fpm gains us. I have never had performance issues before. -
@librarymark Performance of the web UI as well as client check-ins. php-fpm is a dedicated PHP engine; before, the FOG developers were using the PHP engine built into Apache. Moving it external allows Apache to do more things (like serving web page requests) while a dedicated PHP engine does what it does best.
The downside is that, to get that performance, the configuration rules are stricter for the optimization to work.
-
I’ve been having the exact same issues ever since updating to 1.5.4.
Server: Ubuntu 14.04 LTS.
I had 1.5.0 RC10 and it was running smoothly. I only updated because I was having issues with 1709 not restarting after getting a host name; I needed the .10.16 client, and the latest 1.5.3 had it.
Before, with 10 slots open, I could image all 10 computers at an average of 1.5 GB/min. Now it’s barely holding at 200 MB/min on each client, and the entire web GUI is unresponsive. I also have some computers queued up waiting for a slot to open, and they time out midway.
I have a storage node and updated both the main server and the node to 1.5.4, and ever since, it’s been busy replicating over and over.
Also, the PXE boot bzImage step takes much longer. I’m going to go back to 1.5.0 RC10.
How would I go about downgrading?
-
@nrg Well, if you are willing to help us understand what is going on here, I’d like to help keep you on 1.5.4.
I want you to do 2 things listed in this thread.
#1 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/5
We need to add in the memory limit of 256M
#2 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/16
Add the timeout value to your Apache configuration; set the timeout value to 500.
Then just reboot your fog server after the changes.
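If you want to sanity-check that both changes stuck after the reboot, something like this should echo them back (adjust the PHP version in the path to whatever your server actually runs):
grep memory_limit /etc/php/7.1/fpm/pool.d/www.conf
grep -A1 'fcgi://127.0.0.1:9000' /etc/apache2/sites-enabled/001-fog.conf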
-
@george1421 Will try it, but I think it might be an issue with my node. Going to rebuild my node (or disable it) to stop it from replicating over and over. Before the 1.5.4 update, my node was stable with hard drive use of 70%. Now I can see it constantly deleting images because it says they’re not matching; it goes from 20%-40% hard drive usage, so it’s cycling through images that it thinks don’t match.
Will report back. -
@nrg Is your storage node a full fog server at the same version as the master node?
-
@george1421 Yes, it’s identical hardware/software to my master FOG server. I disabled the node server and there’s still an unresponsive issue with the main server. The repeated replication was giving the main server a load average of 10. After shutting it off and disabling it, the load average of the main server was back to normal. Then I tried imaging up to 10 computers, and around the 7th computer the web GUI became unresponsive. The “waiting in line” slot message on the client computers stopped updating. I believe there’s a huge issue with this release.
I’ll try the PHP tweaks; if that doesn’t work, I’ll figure out a way to go back to 1.5.0 RC10 and redo the node fresh.
edit:
I edited the memory to 256M but could not find “/etc/httpd/conf.d/fog.conf” anywhere to edit. -
@nrg Depending on the distro you are using (that point would also be helpful to know), it may be in this file: /etc/apache2/sites-enabled/001-fog.conf
If in doubt you can always search for the file.
cd /etc
grep -R -e ":9000" *
That should tell you the name of the file. If you are using Debian 9.x, we found an issue where php-fpm may not be enabled, which would cause very poor performance with the web GUI too.
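A quick way to confirm php-fpm is actually enabled and running (the service name depends on the PHP version your distro ships, e.g. php7.0-fpm on Debian 9):
systemctl status php7.0-fpm
apachectl -M | grep proxy_fcgi
If proxy_fcgi does not show up in the loaded modules, Apache is not handing PHP off to php-fpm at all.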
FWIW: If you roll it back to anything, I would go to 1.5.0 stable and not RC10.
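If you do decide to roll back and you installed from the git repo, the usual pattern (from memory, so double-check against the wiki) is to check out the older tag and re-run the installer:
cd /path/to/fogproject   # wherever you originally cloned it
git checkout 1.5.0
cd bin
sudo ./installfog.sh
Keep a database backup handy first; the installer migrates the schema forward, but as far as I know it will not downgrade it for you.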