Issues with Fog 1.5.2
-
@k-hays Ok, so you’ve confirmed that you can ping the FOG server from the 10.10.0.x subnet? If so, it may be that double slash in the URL after fog. In that case, please check the FOG settings as well as the storage node configuration to see if you have a trailing slash.
-
Welp, I goofed. I’m going to have to get that website file for you again, because I apparently forgot to reinstall 1.5.2 before grabbing it. If there are any differences between the two, I will let you know. As for your most recent question: yes, we can ping from that subnet (the clients also still connect, see the second picture). There was not a second / in the storage node, but there was one under TFTP PXE KERNEL DIR.
-
Due to hardware changes and accompanying issues (and time constraints), we are just reloading our FOG server on a different machine. Sorry for the delay; summer is our busiest time, and of course it’s when we run into the most issues :,D Thank you though!
-
@K-Hays So is this solved from your point of view?
-
@sebastian-roth Yessir!
-
Welpppp, I suppose not. We were running this server on a Hyper-V and decided to try out two separate options: we had a new computer we were going to load it on, and we also set up a new Hyper-V with Debian instead. Both fresh installs had the same error when we tried to test them. I went ahead and checked the file that George mentioned earlier (on the Hyper-V server), and this is the outcome.
-
@george1421 I adjusted the file to match what you put in this post https://forums.fogproject.org/topic/11797/updated-from-fog-v1-50-to-v1-52-issue/25
Will test soon. Any other ideas?
-
@k-hays Can you deploy an image to a single client without issue?
This seems to me like an issue with PHP-FPM getting overloaded.
On Debian the pm.max_children directive is often stuck at 5, which is generally too low for most setups. (40 is a good starting point for testing.)
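For reference, the stock Debian pool file ships with values along these lines (a sketch from memory; the exact path and defaults vary by PHP version, so check your own www.conf):

```shell
# Typical stock pool values on Debian 9 / PHP 7.0 (assumed path:
# /etc/php/7.0/fpm/pool.d/www.conf) -- reproduced here as a heredoc
# so nothing on your system is touched:
cat <<'EOF'
pm = dynamic
pm.max_children = 5
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3
EOF
```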
-
@quazz Yes, a single client works fine. I don’t know the specific number at which it starts to fail, but in a lab we image anywhere from 20 to 35-ish computers, and it seems to fail whenever we try those numbers. The issue also persisted when we started with Ubuntu.
-
@k-hays There are currently some known issues with PHP-FPM settings/getting overloaded, particularly on Debian-based systems.
I definitely recommend checking the pm.max_children value (not 100% sure where it lives on Debian).
You should be able to check the PHP-FPM logs from the WebUI to see if it’s been complaining about not enough max_children, memory exhaustion, or timeouts, I think.
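If you’d rather grep for that warning directly on the server instead of the WebUI, it typically looks like the line below (the log path and exact wording are from memory, so this is simulated against a throwaway file):

```shell
# On a real Debian 9 box the log is usually /var/log/php7.0-fpm.log
# (assumption). Simulate the warning line so the grep is self-contained:
log=$(mktemp)
echo "WARNING: [pool www] server reached pm.max_children setting (5), consider raising it" > "$log"
grep -c 'max_children' "$log"   # prints 1
rm -f "$log"
```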
-
@quazz I’ll go digging. Any ideas?
-
@k-hays Try
grep -irl pm.max_children /etc
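To show what that grep does, here is a self-contained demo against a throwaway directory — it lists every file that mentions the directive. On a real Debian 9 box I would expect it to print something like /etc/php/7.0/fpm/pool.d/www.conf (path is an assumption):

```shell
# Demo: grep -irl prints the names of files containing the pattern,
# case-insensitively and recursively.
d=$(mktemp -d)
echo 'pm.max_children = 5' > "$d/www.conf"
grep -irl pm.max_children "$d"
rm -rf "$d"
```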
-
@quazz Ok, I found it! Now, what do you think is the max I should put there? Sometimes we might image up to two, maybe three labs at a time. Would 90 be a stretch?
-
@k-hays It depends on how many resources your server has available to itself.
We haven’t done a ton of testing on exact numbers; the right value depends on the amount of RAM you have and how much the average PHP-FPM worker uses.
Start with 40 as a safe value and go from there, imo.
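One rough way to sanity-check a value (my own back-of-the-envelope rule, not an official FOG formula): divide the RAM you’re willing to give PHP-FPM by the average per-worker footprint. With hypothetical numbers:

```shell
# Assumptions: ~2 GB set aside for PHP-FPM, ~50 MB per worker.
# Measure your own footprint with something like:
#   ps -o rss= -C php-fpm7.0   (process name varies by distro/version)
avail_mb=2048
per_worker_mb=50
echo $(( avail_mb / per_worker_mb ))   # prints 40
```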
-
@quazz Ok, will do. I’ll end up testing this today as well. We’re using a server that has a lot of unused resources, so we can devote a good bit to it. I also want to say that we are on the current dev-branch, so 1.5.4. These were all fresh builds as well, even on Ubuntu. Does it automatically set max children to 5 now?
-
@k-hays said in Issues with Fog 1.5.2:
Would 90 be a stretch?
Don’t go that high; you could run into resource exhaustion. Under normal imaging you shouldn’t see more than 10 worker threads. For max children I would set it to 40.
If you are using FOG 1.5.2 or later, here is what I would change in the www.conf file for php-fpm:
php_admin_value[memory_limit] = 256M
pm.max_requests = 2000
pm.max_children = 40
pm.min_spare_servers = 6
pm.start_servers = 5
Update those settings, then restart php-fpm.
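If you’d rather script the change than edit by hand, a couple of sed edits along these lines work; the real www.conf path is an assumption (on Debian 9 it is usually /etc/php/7.0/fpm/pool.d/www.conf), so this is demonstrated on a throwaway copy:

```shell
# Make a scratch copy standing in for www.conf:
conf=$(mktemp)
printf 'pm.max_children = 5\npm.start_servers = 2\n' > "$conf"
# Bump the limits in place -- the same edits you would make in the real file:
sed -i -e 's/^pm\.max_children = .*/pm.max_children = 40/' \
       -e 's/^pm\.start_servers = .*/pm.start_servers = 5/' "$conf"
cat "$conf"   # prints the two updated lines
rm -f "$conf"
```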
Queue up your multicast, then on the FOG server’s Linux console run top and press P to sort by CPU usage. Watch: you should have 5-7 php-fpm worker threads running, and they should be the top CPU users.
-
@george1421 Ok, does it make a difference if we do unicast? Just wondering
-
@k-hays The memory setting will help the unicast imaging too
But just to be sure: none of your conditions have changed since your original post two months ago? You are still having the exact same condition?
-
In terms of network, yes. The server itself is the same, but it’s been reloaded with Debian 9 and 1.5.4 as opposed to Ubuntu and 1.5.2.
-
@k-hays Also, if you meant the error, then yes: it is the exact same issue (or at least extremely similar). I made the changes and will test again shortly.