Storage Nodes stop reporting after a while
-
Here is a more detailed post about it:
-
I’ve found this site that shows how to set new limits properly on Ubuntu.
https://www.drupal.org/node/474152
I’ve converted the commands for CentOS 7/Fedora/RHEL - and made it into an SSH command run against a remote server.
ssh aifog "echo 'apache soft nofile 1600' >> /etc/security/limits.conf;echo 'apache hard nofile 65535' >> /etc/security/limits.conf;systemctl restart httpd"
Note that the above is an ssh command that uses aliases and PKI-based keys - which my project called ssh-pki-setup has already established for me. The command executes everything inside the double quotes against the remote server; it successfully added the options to the remote server’s file and restarted httpd.
If successful, I’ll do this for all storage nodes - and post commands that can be run manually too, if you don’t have as many servers to manage as I do and prefer manual.
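For a single server, the equivalent manual steps are sketched below (assuming CentOS 7 paths). One caveat worth noting: on systemd distros, limits.conf is applied by PAM to login sessions only - an httpd started by systemd takes its descriptor limit from the service unit, so a drop-in may also be needed.

```shell
# Raise the per-user limits for the apache user (applies to PAM login sessions).
echo 'apache soft nofile 1600'  >> /etc/security/limits.conf
echo 'apache hard nofile 65535' >> /etc/security/limits.conf

# httpd launched by systemd ignores limits.conf; set the limit on the unit
# itself via a drop-in file.
mkdir -p /etc/systemd/system/httpd.service.d
cat > /etc/systemd/system/httpd.service.d/limits.conf <<'EOF'
[Service]
LimitNOFILE=65535
EOF

systemctl daemon-reload
systemctl restart httpd
```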
-
After the last reboot - all storage nodes are still reporting. But I assume they will eventually fail. I’m just waiting for that to happen - to see if the one I modified hangs in there.
In the meantime, I’ve been watching the number of files open by apache with this command I put together:
lsof -l -u apache | wc -l
Open files have been steadily growing:
[root@wgfog ~]# lsof -l -u apache | wc -l
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/42/gvfs
      Output information may be incomplete.
6316
Subsequent runs of the same command (the warning repeats each time, so only the counts are shown):
6412, 6424, 6432, 6444, 6448, 6448, 6456, 6456, 6464, 6472, 7010, 7538
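As a lighter-weight alternative (a sketch, not from the thread): the fuse warning can be sidestepped by counting descriptors straight out of /proc, since lsof stats filesystems it may not be able to reach. The function name is mine.

```shell
# Count open file descriptors across every process owned by a user by
# listing /proc/<pid>/fd - no filesystem stat()s, so no fuse warnings.
count_user_fds() {
    total=0
    for pid in $(pgrep -u "$1"); do
        n=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l)
        total=$((total + n))
    done
    echo "$total"
}

# Example: descriptors held by the current user (use "apache" on a node)
count_user_fds "$(id -un)"
```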
-
Ok, the node I’ve been monitoring open files on - just fell off.
And - the magical limit is…
8006
files open by Apache - and everything just stops.
And… the one storage node I modified… it stopped reporting too…
-
This article seems relevant but I don’t see how it could help in the case of FOG. https://httpd.apache.org/docs/current/vhosts/fd-limits.html
-
I think this must be a file descriptor leak in the storage node configuration.
I’m going to patch it myself after lunch.
-
It would also appear that Apache is raising the file descriptor limits as best it can - but it just can’t keep up, because so many files are being opened so fast and never closed.
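One way to confirm what limit a running httpd process is actually under (a hypothetical check; run it as root on a node so the limits file is readable) is to read the kernel’s per-process limits file, which reflects the effective limit regardless of what limits.conf says:

```shell
# Show the "Max open files" line for the oldest httpd process;
# fall back to this shell's own limits if no httpd is running.
pid=$(pgrep -o httpd || echo self)
grep 'Max open files' "/proc/$pid/limits"
```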
-
Figured out the bandwidth monitor is the primary file that’s getting so many calls.
bandwidth.php
Currently hunting down every place where it’s used - including its define line in
config.class.php
called STORAGE_BANDWIDTHPATH
-
[root@mastaweb fog]# grep -rl 'fopen' .
./status/logtoview.php
./lib/fog/eventmanager.class.php
./lib/fog/reportmaker.class.php
./lib/fog/hookmanager.class.php
./lib/fog/tasktype.class.php
./lib/fog/fogbase.class.php
./lib/fog/schema.class.php
./lib/fog/fogpage.class.php
./lib/pages/fogconfigurationpage.class.php
./lib/db/mysqldump.class.php
./lib/service/fogservice.class.php
./lib/client/snapinclient.class.php
./client/download.php
[root@mastaweb fog]# grep -rl 'readfile' .
[root@mastaweb fog]# grep -rl 'file_get_content' .
./status/bandwidth.php
./lib/fog/fogbase.class.php
This is a list of all files that call fopen and file_get_contents (the readfile search returned no matches).
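The three passes can also be collapsed into a single grep with an alternation pattern. The snippet below demonstrates it against a scratch directory so it’s self-contained; on a FOG server you would run the grep from the web root instead. File names here are made up for the demo.

```shell
# Build a scratch directory with one file that opens a descriptor and one
# that does not, then list the files containing any of the three calls.
tmp=$(mktemp -d)
printf '<?php fopen("bandwidth.txt", "r");\n' > "$tmp/uses_fopen.php"
printf '<?php echo "no file IO here";\n'      > "$tmp/clean.php"

grep -rlE 'fopen|readfile|file_get_contents' "$tmp"
```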
-
I’m struggling to find the file descriptor leak.
For now, I’ve patched together a way to keep things going: a cron script that executes ssh commands against the remote servers, using the existing PKI ssh authentication. I’ve set it up to run every 2 minutes.
ssh=$(command -v ssh)
array=( aifog annex bmfog clfog cvfog ckfog dufog fmfog hffog jwfog lhfog prfog rofog wgfog )
for i in "${array[@]}"
do
    $ssh $i "lsof -l -u apache | wc -l > /root/apacheOpenFiles.txt" > /dev/null 2>&1
    openApacheFiles=$($ssh $i "cat /root/apacheOpenFiles.txt")
    if [[ "$openApacheFiles" -gt 8000 ]]; then
        $ssh $i "killall --user apache" > /dev/null 2>&1
    fi
done
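To run it every 2 minutes, a crontab entry along these lines would do (the script path and cron.d filename here are hypothetical):

```
# /etc/cron.d/apache-fd-watchdog - run the watchdog every 2 minutes as root
*/2 * * * * root /root/apacheFdWatchdog.sh
```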
-
Additionally - apache error logs are filling up really fast on all the storage nodes from this issue. I’m going to have to script clearing them.
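A logrotate policy along these lines could keep the error logs in check until the leak is fixed (the directives are standard logrotate; the size threshold and file path are guesses, not from the thread):

```
# /etc/logrotate.d/httpd-fog  (hypothetical file)
/var/log/httpd/*log {
    size 100M        # rotate as soon as a log exceeds 100 MB
    rotate 2         # keep only two old copies
    copytruncate     # truncate in place so httpd keeps its open handle
    missingok
    compress
}
```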
-
Cross-linking similar thread:
https://forums.fogproject.org/topic/8314/fog-log-viewer-for-nodes-giving-error
-
Update on this issue - I can reproduce it at home quite easily.
Using just my laptop, I simulated many users at once just by opening 10 tabs and letting them sit on the fog dashboard.
Very, very quickly, my one storage node at home stopped reporting its version and interface.
Even after closing all but one tab, it continued to not respond.
Checking the number of open files held by the user apache with the command
lsof -l -u apache | wc -l
shows 47029 open files after just a few short moments.
After waiting a while - and without taking any corrective action - the number of open files by apache dropped to
11812
and the node started reporting its version and interface again.
My guess is that either Linux or httpd is closing the files on its own after some time, via its own built-in cleanup mechanisms. I don’t think the files are being closed properly by FOG.
So - my guess is that 2 to 4 people at work have the fog dashboard open and just let it sit, all day long. This isn’t necessarily wrong to do, and FOG should be able to cope with this.
-
Tom worked on a patch quietly since I started this thread - and I’ve tested it - and it works.
My storage node at home that I tested on is an old Pentium 4 with an IDE drive in it - so it’s very slow.
Results at home: I could open 12 pages and let them sit on the fog dashboard. CPU load on the storage node stayed below 4, there weren’t uncontrollable httpd processes spawning, and open files by the apache user stayed below 4k. The bandwidth chart not only reported - it’s now reporting more smoothly than ever, is actually accurate, and no longer has erratic spells. It’s like butter.
The issue, as it was explained to me, is that the JS which renders the bandwidth chart and does the polling of
bandwidth.php
on all nodes enabled for bandwidth reporting - it wasn’t waiting for a response; it would just issue another poll before the previous response was received. The more dashboards open, the worse it got, until eventually apache was unable to do anything due to its tremendous backlog of ‘stacked’ processes.
I didn’t tell anyone but Tom - but this (now solved) issue would break imaging at remote locations, too, because we use the location plugin and apache on the remote nodes is relied upon to get imaging done in that setup. That too is solved now.
These fixes will all be in RC-9.