FOG Unresponsive under "heavy" load
-
I’m wondering if the ondemand PHP-FPM process manager is the better choice for FOG in general, and for cases like this specifically.
In my experience ondemand is only marginally slower than dynamic or static, but uses far less RAM on average. It’s also far easier to set up correctly, since you don’t have to tune minimum children or anything like that.
The problem with the current setup is that FPM processes that have claimed a lot of RAM will only respawn after they’ve hit the request limit, which could take ages in certain scenarios.
With ondemand you can specify an idle timeout, so a process that is doing nothing gets killed off and its memory freed back to the system.
I would also recommend the event MPM for Apache alongside this. There is little point in staying with prefork when we are using FPM anyway.
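For illustration, a minimal sketch of what an ondemand pool could look like in www.conf (the values here are examples, not tested recommendations for FOG):
; /etc/php/7.x/fpm/pool.d/www.conf
pm = ondemand
pm.max_children = 50                ; hard ceiling on concurrent PHP workers
pm.process_idle_timeout = 10s       ; kill a worker after 10s of inactivity and free its RAM
pm.max_requests = 500               ; recycle a worker after this many requests to contain memory creep
Switching Apache from the prefork MPM to the event MPM is a separate change (a2dismod/a2enmod on Debian-family systems) and only makes sense once mod_php is out of the picture.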
-
I am running Ubuntu 16.04 server in VMware. I was never able to get 1.5.4 to multicast until I made the changes outlined here to the www.conf file. I was suffering from the same things fry_p had problems with. Downloading boot.php would just be “…” for days. Now it works (like it used to).
Thank you, george1421!
Edit: Well, I take that back (a little bit). After a 20-PC multicast session, none of the PCs were able to ‘update the database’. I had to cancel the session, reboot the FOG server, and reboot the PCs. At least the image was successfully blasted out, otherwise I would be having a bad day right about now.
-
@librarymark
And while trying to multicast 8 PCs, now I get this again:
-
@librarymark
And after I reboot the server and the multicast actually runs, the PCs are stuck at this:
and FOG’s webpage says this:
-
@librarymark Do you get PHP memory exhaustion errors in the logs?
Would also be interested in seeing your
free -m
and
top
(shift+M to sort by memory) stats when this happens.
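If it helps, a quick way to pull all of that together (log paths assume a stock Ubuntu Apache/php-fpm install):
grep -i "memory" /var/log/apache2/error.log     # look for “Allowed memory size … exhausted”
grep -i "memory" /var/log/php7.1-fpm.log        # php-fpm’s own log, if yours logs there
free -m
top                                             # then press shift+M to sort by memory
-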
@librarymark said in FOG Unresponsive under "heavy" load:
Edit: Well, I take that back (a little bit). After a 20-PC multicast session, none of the PCs were able to ‘update the database’. I had to cancel the session, reboot the FOG server, and reboot the PCs. At least the image was successfully blasted out, otherwise I would be having a bad day right about now.
It would be interesting to know the memory usage when this broke.
Also, just for clarity, what changes did you make to the www.conf file? Did you up the memory limit to 256MB?
-
@librarymark OK, for the gateway timeout, let’s work with that. I think if you look in the Apache error log you will see a timeout waiting for php-fpm to respond. What we need to do is tell Apache to wait a bit longer before timing out.
About how long does it take to push out your image to 20 computers?
-
@george1421
I just upped the memory in /etc/php/7.1/fpm/pool.d/www.conf:
php_admin_value[memory_limit] = 256M
-
@librarymark What I want you to test is outlined in this post: https://forums.fogproject.org/topic/11713/503-service-unavailable-error/40
I want you to update this section:
<Proxy "fcgi://127.0.0.1:9000">
    ProxySet timeout=500
</Proxy>
Set the timeout in seconds to be just a bit longer than your push time.
-
Where do I find the “push time”?
I edited the file /etc/apache2/sites-enabled/001-fog.conf, and it now looks like this:
<VirtualHost *:80>
    <Proxy "fcgi://127.0.0.1:9000">
        ProxySet timeout=300
    </Proxy>
    <FilesMatch "\.php$">
        SetHandler "proxy:fcgi://127.0.0.1:9000/"
    </FilesMatch>
    KeepAlive Off
    ServerName 10.5.0.61
    DocumentRoot /var/www/html/
    <Directory /var/www/html/fog/>
        DirectoryIndex index.php index.html index.htm
    </Directory>
    RewriteEngine On
    RewriteCond %{REQUEST_METHOD} ^(TRACE|TRACK)
    RewriteRule .* - [F]
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
    RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-d
    RewriteRule ^/fog/(.*)$ /fog/api/index.php [QSA,L]
</VirtualHost>
Is that correct? In any case I will not be able to test now because we just opened (public library). It might be a few days.
-
@librarymark Right, that looks good. Make sure you set the timeout to the right number of seconds. Right now, as configured, Apache will wait 5 minutes (300 seconds) for php-fpm to respond before giving up. If your image push time is more than 5 minutes you need to adjust this number.
[edit] Sorry, I was not clear: “push time” is the time it takes to send the image to all 20 computers when using a multicast image.
-
@george1421
My multicast sessions usually take about 5-7 minutes to complete. Is that what you mean?
-
@librarymark Yes, that is what I mean. So change that 300 to 500. Understand this is only a test to see if that IS what is causing your multicast failure. If this proves out, we will need to get back with the developers to understand why.
-
@george1421
How many versions back do we need to go to bypass this? I think I want to step back in time until the bugs are worked out. Or, what older version of Ubuntu would work better?
-
@librarymark Well, if it’s a php-fpm issue, you can just turn it off and bypass the issue.
The downside is that if we don’t have different use cases helping us understand what is going wrong, it will never get fixed. But if you want to disable php-fpm I can tell you how; it’s just adding one line to the fog.conf file for Apache.
-
@george1421
I am curious what php-fpm gains us. I have never had performance issues before.
-
@librarymark Performance of the web UI as well as client check-ins. php-fpm is a dedicated PHP engine; before, the FOG developers were using the PHP engine built into Apache. Moving it external lets Apache do more things (like service web page requests) while a dedicated PHP engine does what it does best.
The downside is that for the optimization to work, the configuration rules are stricter.
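Concretely, it’s this handler block in the FOG Apache config (the same one shown earlier in this thread) that hands every .php request to the external php-fpm service over FastCGI instead of running PHP inside Apache:
<FilesMatch "\.php$">
    SetHandler "proxy:fcgi://127.0.0.1:9000/"
</FilesMatch>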
-
I’m having the same exact issues ever since updating to 1.5.4.
Server: Ubuntu 14.04 LTS.
I had 1.5.0 RC10 and it was running smoothly. I only updated because I was having issues with 1709 not restarting after getting a hostname. I needed the .10.16 client and the latest 1.5.3 had it.
Before, with 10 slots open, I could image all 10 computers at an average of 1.5 GB/min. Now it’s barely holding at 200 MB/min on each client, and the entire web GUI is unresponsive. I also have some computers queued up waiting for a slot to open, and they time out midway.
I have a server node and updated both the main server and the node to 1.5.4, and ever since, it’s been busy replicating over and over.
Also, the PXE boot bzImage part takes much longer to do. I’m going to go back to 1.5.0 RC10.
How would I go about downgrading?
-
@nrg Well, if you are willing to help us understand what is going on here, I’d like to help keep you on 1.5.4.
I want you to do the 2 things listed in this thread (both are summarized below).
#1 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/5
We need to add in the memory limit of 256M.
#2 https://forums.fogproject.org/topic/12057/fog-unresponsive-under-heavy-load/16
Add the timeout value to your Apache configuration. Set the timeout value to 500.
Then just reboot your FOG server after the changes.
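Putting both changes in one place (same values as in the linked posts; this is a sketch only, so adjust the PHP version in the path to whatever your server actually runs):
In /etc/php/7.1/fpm/pool.d/www.conf:
php_admin_value[memory_limit] = 256M
In /etc/apache2/sites-enabled/001-fog.conf, inside the <VirtualHost *:80> block:
<Proxy "fcgi://127.0.0.1:9000">
    ProxySet timeout=500
</Proxy>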
-
@george1421 Will try it, but I think it might be an issue with my node. Going to rebuild my node (or disable it) to stop it from replicating over and over. Before the 1.5.4 update, my node was stable with hard drive use at 70%. Now I can see it constantly deleting images because it says they don’t match. It goes from 20%-40% hard drive usage, so it’s cycling through images that it thinks don’t match.
Will report back.