1.3.4 - high cpu load - client login
-
@UWPVIOLATOR This looks to me like PHP and Apache are starting before the MySQL service is running. At least, that’s my guess as to why those errors are suddenly showing up.
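If that turns out to be the cause, one way to address it is to make the web server wait for the database at boot. The lines below are only a minimal sketch using a systemd drop-in; the unit names (apache2, mysql) are assumptions and may be httpd or mariadb depending on your distro.
# Create a drop-in override so Apache starts only after MySQL (unit names vary by distro):
sudo systemctl edit apache2
# In the editor that opens, add:
# [Unit]
# After=mysql.service
# Wants=mysql.service
# Then reload systemd and test with a reboot:
sudo systemctl daemon-reload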
-
Yeah, we had to restart those services after the reboot. So we are semi-stable right now after I disabled the FOG service on all our clients. What else can we do to figure out why we keep maxing out the CPU?
Still getting this error:
[Wed Feb 22 09:02:41.373552 2017] [mpm_prefork:error] [pid 28247] AH00161: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
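For reference, the limit that error refers to lives in the prefork MPM config (the mpm_prefork.conf mentioned later in this thread; on Debian/Ubuntu it is typically /etc/apache2/mods-available/mpm_prefork.conf, the path varies on other distros). The stanza below is only illustrative - the numbers are placeholders to show the shape of the change, not tuned recommendations. MaxRequestWorkers cannot exceed ServerLimit, so the two are usually raised together, and each prefork child is a full process, so available RAM is the practical ceiling.
<IfModule mpm_prefork_module>
    StartServers              5
    MinSpareServers           5
    MaxSpareServers          10
    ServerLimit             500
    MaxRequestWorkers       500
    MaxConnectionsPerChild    0
</IfModule>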
-
@UWPVIOLATOR said in 1.3.4 - high cpu load - client login:
server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting
What’s your client checkin time set to?
-
30 minutes, but I have a GPO pushed out to stop the FOG service until we get it stable.
If you look back a few posts, I think encryption is part of the issue. All our clients’ encryption was broken and they were replying back every minute. We tried to reset them all but we got errors, so at this point I am not sure all the clients’ encryption has been reset.
mysql
use fog
UPDATE hosts SET hostPubKey="", hostSecToken="", hostSecTime="0000-00-00 00:00:00";
-
@UWPVIOLATOR As long as things are “usable” you might have better luck just running:
UPDATE `hosts` SET `hostPubKey`= '', `hostSecToken` = '';
The hostSecTime is irrelevant at that point.
I don’t think simply clearing them will fix the problem, though. I suspect what’s happening is that hosts are being “kicked out” because there are simply too many of them.
Another way to achieve the “reset all” would be:
Put all hosts into a group.
From the group you can then reset all data (whether or not there is data to be reset).
-
We tried to put them all into one group before. We have too many hosts and it crashes.
Will try:
UPDATE `hosts` SET `hostPubKey` = '', `hostSecTok` = '';
We want to leave FOG alone today since we can do imaging while the GPO is in effect. We will then schedule an update to RC10 or newest. Then see if it stays stable. Then start removing the GPO site by site to see if we crash it again.
Did you see the mpm_prefork.conf file I posted below? Does that look right?
-
ERROR 1054 (42S22): Unknown column 'hostSecTok' in 'field list'
-
@UWPVIOLATOR Edited the original, sorry, I missed the “en” at the end of hostSecToken.
-
Query OK, 4023 rows affected (0.15 sec)
Rows matched: 21413  Changed: 4023  Warnings: 0
-
What is this doing that it’s pulling so many resources? It’s happening multiple times a day.
-
Could you verify for me the path listed for “Snapin Path” in your Storage Management master node?
-
@Junkhacker /opt/fog/snapins
Sorry for the confusion earlier with us all participating here; I should’ve introduced myself.
-
@andjjru I’m not entirely familiar with that part of the code, but the md5sum task must be part of the image replicator service. I knew we did that for snapins, but I didn’t think we did it for images. Anyway, perhaps you should try disabling IMAGEREPLICATORGLOBALENABLED until most of the clients have had a chance to check in and reset their keys.
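If you want to confirm that it really is the replicator doing the hashing before flipping that setting, a quick process check from the FOG server shell works. The commented systemctl line assumes the service is named FOGImageReplicator, which may differ between FOG versions, so treat it as an assumption.
# Look for the replicator worker and any md5sum children it has spawned:
ps aux | grep -iE 'replicat|md5sum' | grep -v grep
# To stop it outright while things settle (service name assumed, may vary by version/distro):
# systemctl stop FOGImageReplicator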
-
@Junkhacker Alright I disabled IMAGEREPLICATORGLOBALENABLED and that process went away. Thanks.
-
@UWPVIOLATOR said in 1.3.4 - high cpu load - client login:
What is this doing that it is pulling so much resources? Happening multiple times a day.
Not long ago, a week or so back, a change was made so the entire image gets hashed instead of just the first 10 megs.
Turn the occurrence of this way down:
Web Interface -> FOG Configuration -> FOG Settings -> FOG Linux Service Sleep Times -> IMAGEREPSLEEPTIME
Set that to something like 24 hours (86400 seconds). The default is 600 seconds. What I’m guessing is you have a ton of images and it takes hours to hash them all - thus your FOG Server is always slammed.
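If the web UI is sluggish while the server is slammed, the same setting can usually be read or changed straight from the database. This is only a sketch: the table and column names (globalSettings, settingKey, settingValue) and the credentials are assumptions based on a typical FOG install, so verify them against your own schema before running anything.
# Assumed schema - check before running. Value is in seconds (86400 = 24 hours).
mysql -u root fog -e "SELECT settingKey, settingValue FROM globalSettings WHERE settingKey='IMAGEREPSLEEPTIME';"
mysql -u root fog -e "UPDATE globalSettings SET settingValue='86400' WHERE settingKey='IMAGEREPSLEEPTIME';"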
-
@Wayne-Workman I will go back to using 10 MB. It wasn’t using the first 10 MB though, because that was constantly pinging for traffic across FTP; originally it only checked the file sizes. While this worked, it failed to detect changes in files like d1.partitions that might have been updated.
So I re-added the “file hash” checking, but made it so the hashing is done on the “local” nodes rather than on a single side.
I am trying to check things out.
I’m thinking about hashing the last 10 MB of the file instead, though, as it’s entirely possible the first 10 MB would be the same, but much less likely that the last 10 MB would be.
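To make the idea concrete, here is a rough local sketch of the size-then-partial-hash comparison described above. This is not FOG’s actual replicator code; the 10485760-byte window and the example paths are just placeholders.
# Returns 0 (true) if the destination copy looks stale and should be re-transferred.
# Compare sizes first (cheap), then hash only the tail of each file.
needs_replication() {
    local src="$1" dst="$2"
    [ ! -e "$dst" ] && return 0
    [ "$(stat -c%s "$src")" != "$(stat -c%s "$dst")" ] && return 0
    local a b
    a=$(tail -c 10485760 "$src" | md5sum | cut -d' ' -f1)
    b=$(tail -c 10485760 "$dst" | md5sum | cut -d' ' -f1)
    [ "$a" != "$b" ]
}
# Example (paths hypothetical): needs_replication /images/Acerbase/d1p1.img /mnt/node2/images/Acerbase/d1p1.img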
-
@Tom-Elliott This appears to work for hashing the last 10 megs. It also works for files that are under 10 MB:
[root@fog-server Acerbase]# tail -c 10485760 d1p1.img | md5sum
326ea3163c9bc3e202fa323e47f02b23  -
-
@Wayne-Workman I’m probably going to go with sha512sum to reduce the potential for collisions (though md5 shouldn’t produce many).
-
@Tom-Elliott Doesn’t really matter what you choose now that we’re only going to hash the last 10 megs. Speed differences between them won’t be noticeable.
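For anyone curious, a quick way to sanity-check that on your own images (the file name is just the example from earlier in the thread):
# Compare how long each hash takes on the last 10 MiB of an image file.
time tail -c 10485760 d1p1.img | md5sum
time tail -c 10485760 d1p1.img | sha512sum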
-
Updated working-1.3.5.
I want to push up RC-11, but want to hear more back about the inits (which will have to wait until at least tomorrow, I think).