Storage Nodes stop reporting after a while
-
It would also appear that Apache is raising its file descriptor limits as far as it can - but it just can't keep up, because so many files are being opened so quickly and never closed.
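If anyone wants to check this on their own node, something along these lines should show the per-process limit for the running httpd workers (the process name is an assumption - on Debian-based systems it would be apache2 instead of httpd):

for pid in $(pgrep httpd); do
    # 'Max open files' is the soft/hard file descriptor limit for that worker
    grep 'Max open files' /proc/$pid/limits
done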
-
Figured out that the bandwidth monitor, bandwidth.php, is the primary file getting so many calls.
Currently hunting down every place where it's used - including its define line in config.class.php, the constant called STORAGE_BANDWIDTHPATH.
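For reference, a search like the following (run from the FOG web root - the exact path is an assumption, adjust to your install) should turn up every usage of that constant:

cd /var/www/fog          # web root location is an assumption
grep -rn 'STORAGE_BANDWIDTHPATH' .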
-
[root@mastaweb fog]# grep -rl 'fopen' .
./status/logtoview.php
./lib/fog/eventmanager.class.php
./lib/fog/reportmaker.class.php
./lib/fog/hookmanager.class.php
./lib/fog/tasktype.class.php
./lib/fog/fogbase.class.php
./lib/fog/schema.class.php
./lib/fog/fogpage.class.php
./lib/pages/fogconfigurationpage.class.php
./lib/db/mysqldump.class.php
./lib/service/fogservice.class.php
./lib/client/snapinclient.class.php
./client/download.php
[root@mastaweb fog]# grep -rl 'readfile' .
[root@mastaweb fog]# grep -rl 'file_get_content' .
./status/bandwidth.php
./lib/fog/fogbase.class.php
This is a list of all the files that call fopen and file_get_contents.
-
I’m struggling to find the file descriptor leak.
For now, I've patched together a way to keep things going. It's a cron script that executes SSH commands against the remote servers, using the existing PKI SSH key authentication. I've set it up to run every 2 minutes.
#!/bin/bash
# Kill apache on any storage node whose open file count has run away.
echo=$(command -v echo)
ssh=$(command -v ssh)

array=( aifog annex bmfog clfog cvfog ckfog dufog fmfog hffog jwfog lhfog prfog rofog wgfog )

for i in "${array[@]}"
do
    # Count the files apache has open on the remote node and stash the result there.
    $ssh $i "lsof -l -u apache | wc -l > /root/apacheOpenFiles.txt" > /dev/null 2>&1
    openApacheFiles=$($ssh $i "cat /root/apacheOpenFiles.txt")
    # If apache is holding more than 8000 files open, kill its processes so they respawn cleanly.
    if [[ "$openApacheFiles" -gt 8000 ]]; then
        $ssh $i "killall --user apache" > /dev/null 2>&1
    fi
done
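For completeness, the 2-minute schedule is just a standard cron entry - the script path below is hypothetical, drop it wherever root's scripts live. In /etc/cron.d form (note the user field):

# Run the watchdog every 2 minutes as root
*/2 * * * * root /root/check-apache-open-files.sh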
-
Additionally - the Apache error logs are filling up really fast on all the storage nodes because of this issue. I'm going to have to script clearing them.
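Something like the below could be appended to the same cron script above (so it reuses the same $ssh and array variables). The log path is an assumption - it varies by distro, e.g. /var/log/apache2/error.log on Debian-based systems:

for i in "${array[@]}"
do
    # Truncate rather than delete, so running httpd processes keep a valid file handle
    $ssh $i "truncate -s 0 /var/log/httpd/error_log" > /dev/null 2>&1
done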
-
Cross-linking similar thread:
https://forums.fogproject.org/topic/8314/fog-log-viewer-for-nodes-giving-error
-
Update on this issue - I can reproduce it at home quite easily.
Using just my laptop, I simulated many simultaneous users by opening 10 tabs and letting them sit on the fog dashboard.
Very, very quickly, my one storage node at home stopped reporting its version and interface.
Even after closing all but one tab, it continued to not respond.
Checking the number of files held open by the apache user with the command lsof -l -u apache | wc -l, the count after just a few short moments was 47029.
After waiting a while - and without taking any corrective action - the number of files open by apache dropped to 11812 and the node started reporting its version and interface again. My guess is that either Linux or httpd is closing the files on its own after some time, because it might have its very own built-in cleanup mechanisms. I don't think the files are being closed properly by FOG.
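If anyone wants to watch the leak grow and shrink in real time, this just repeats the same count every few seconds:

watch -n 5 "lsof -l -u apache | wc -l"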
So - my guess is that 2 to 4 people at work have the fog dashboard open and just let it sit, all day long. This isn’t necessarily wrong to do, and FOG should be able to cope with this.
-
Tom has been quietly working on a patch since I started this thread - I've tested it, and it works.
My storage node at home that I tested on is an old Pentium 4 with an IDE drive in it - so it’s very slow.
Results at home: I could open 12 pages and let them sit on the fog dashboard - CPU usage on the storage node stayed below 4, there weren't uncontrollable httpd processes spawning, open files by the apache user stayed below 4k - and the bandwidth chart not only reported, it's now reporting more smoothly than ever, is actually accurate, and doesn't have erratic spells anymore. It's like butter.
The issue, as it was explained to me, is that the JS which renders the bandwidth chart and polls bandwidth.php on every node enabled for bandwidth reporting wasn't waiting for a response - it would just issue another poll before the previous response was received. The more dashboards open, the worse it got, until eventually apache was unable to do anything due to its tremendous load of 'stacked' back-logged processes. I didn't tell anyone but Tom - but this (now solved) issue would break imaging at remote locations, too, because we use the location plugin and apache on the remote nodes is relied upon to get imaging done in this setup. That too is solved now. These fixes will all be in RC-9.
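To put the difference in shell terms (purely a conceptual sketch of the polling behavior, not the actual JavaScript patch - the URL and interval are assumptions):

# Broken pattern: fire a new request on a fixed timer, whether or not the last one finished.
while true; do
    curl -s http://storage-node/fog/status/bandwidth.php > /dev/null &   # backgrounded, so requests pile up
    sleep 3
done

# Fixed pattern: only issue the next poll once the previous response has come back.
while true; do
    curl -s http://storage-node/fog/status/bandwidth.php > /dev/null     # blocks until the response arrives
    sleep 3
done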