Best posts made by Wayne Workman
-
RE: Whats a pending host?
A pending host is a host with the FOG Client installed that is not yet registered in your FOG Server. That FOG Client is calling home to your FOG Server and is added as a pending host because it is unknown, and therefore untrusted, until you (the admin) claim it. Claiming is basically saying ‘Yes, that’s one of ours, trust it.’
-
RE: Wrong Network Interface
@imagingmaster21 Are you sure you have 1.5.0 installed? The latest stable is 1.5.2 and if you’re not using that, I would recommend you try it.
-
RE: Error uploading image
@john-johnson Probably just bad credentials set for your storage node. This article will get you going: https://wiki.fogproject.org/wiki/index.php?title=Troubleshoot_FOG
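If you want to test those credentials directly, a quick check from any machine with curl works - the username, password, and server IP below are placeholders for whatever is set under your node’s entry in Storage Management:
curl -u fog:'yourStorageNodePassword' ftp://192.168.1.10/ && echo "FTP login OK"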
-
RE: Is it possible to Capture to Storage Node only? (instead of master node)
Tom is referring to FOG’s group-to-group replication.
- An image has one group as its primary group, but can be associated with many storage groups.
- The image will always capture to the primary group’s master storage node.
- Replication looks for images that belong to multiple groups - and replicates from the primary master to the other associated groups’ master nodes.
- Replication then replicates images from each group’s masters to other ‘regular’ storage nodes in the master’s group.
- A storage node can belong to multiple storage groups - you just need a storage node entry for each. For example, a non-master in one group can be a master in another group.
See this for more information: https://wiki.fogproject.org/wiki/index.php?title=Replication
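To make the order in the list above concrete, here is a rough bash illustration of the direction of the copies only - this is not FOG’s replicator code (which runs as a service and handles the transfers itself), and the hostnames and image name are made up:
# Step 1: the primary group's master pushes to the masters of the other associated groups
for other_master in fog-master-b fog-master-c; do
    rsync -a /images/win10-base/ root@"$other_master":/images/win10-base/
done
# Step 2: each group's master then pushes to the regular (non-master) nodes in its own group
for node in fog-node-1 fog-node-2; do
    rsync -a /images/win10-base/ root@"$node":/images/win10-base/
done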
-
RE: Capturing an Ubuntu 18.04 corrupts filesystem on client
@tywyn Ubuntu by default sets up LVM partitions - and FOG cannot resize LVM partitions. During Ubuntu 18.04 installation, you will need to manually configure the partitions - and not use LVM. FOG can resize Ext4 type partitions, so use those for everything except swap space.
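If you’re not sure whether an existing install is using LVM, a quick check from the installed system or a live environment (device names will differ) is:
lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT   # 'lvm' in the TYPE column means LVM is in use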
-
RE: capture image error mounting /dev/sda3 failed
@amerhbb The error says the filesystem is not clean, so please read through this: https://wiki.fogproject.org/wiki/index.php?title=Windows_Dirty_Bit
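The usual fix is to boot the machine into Windows and let chkdsk clean the volume, as the wiki describes. If you want to confirm or clear the flag from the Linux side instead, ntfsfix from ntfs-3g can do it - /dev/sda3 below matches the partition in your error:
ntfsfix -n /dev/sda3   # dry run: report problems, including the dirty flag, without changing anything
ntfsfix -d /dev/sda3   # clear the dirty flag (running chkdsk in Windows is still the safer route)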
-
RE: 503 Service Unavailable
There’s also a script that will update all the spots with the new IP: https://github.com/FOGProject/fog-community-scripts/tree/master/updateIP
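If you’d rather check by hand first, the installer’s answer file records the address the server was installed with (the path below is the usual location, but it can vary):
grep -i 'ipaddress' /opt/fog/.fogsettings
# after correcting it, re-run the installer (or use the updateIP script above) so the web UI,
# TFTP/iPXE files, and storage node entries all pick up the new address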
-
RE: Is it possible to capture a TPM enabled computer's image?
@vince-villarreal Yep. This can be automated with group policy though. When a box joins your domain automatically via the FOG Client, you can have group policy turn on TPM. I suppose TPM would need to be turned off somehow via postinit scripts.
-
RE: FOG Offline Install
@bbebz3 ‘Downloading binaries needed’ here: https://github.com/FOGProject/fogproject/blob/d9e7a329a6ec6384593c75df3026ca0b46efa00f/lib/common/functions.sh#L1927
I’d probably recommend you just use
grep -r 'wget' .
or
grep -r 'curl' .
to search for web calls in the installation code.
-
RE: Nodes not behaving properly on a rebuild
@lpetelik said in Nodes not behaving properly on a rebuild:
but not under the Dashboard nor under FOG Configuration, Kernel versions.
Any errors in the Apache log? On CentOS 7 that should be
/var/log/httpd/error_log
Also, about what @Quazz said, the nodes would need to be updated with the new master’s IP address - that should be a matter of updating
/opt/fog/.fogsettings
and re-running the installer on them. Also - the Node FOG Version and the master FOG Version must match or things won’t work right.
-
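A rough sketch of that repointing, run on each storage node (the .fogsettings variable name and the installer path here are typical but not guaranteed - check your own file first, and substitute the real master IP):
sudo sed -i "s/^snmysqlhost=.*/snmysqlhost='192.168.1.10'/" /opt/fog/.fogsettings   # point the node at the new master
cd /path/to/fogproject/bin && sudo ./installfog.sh -y                               # re-run the installer with the saved answers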
RE: Nodes not behaving properly on a rebuild
@lpetelik Good deal. If you hit other issues, feel free to post here so we can try to help.
-
RE: Mirror FOG database across two servers
@mattf You could separate the fog servers and the database - so that both servers use the same database on a dedicated VM for this purpose. In this scenario, the fog server would not be able to process queries as quickly because each query would need to traverse the network. Another option is you could use MariaDB 10.x Galera to create a database cluster between the two fog servers. Based on my own experiences with both of these things and considering how heavy FOG is on the database, I’d opt for a Galera database cluster to keep the database local to each server. Here is information on that: https://mariadb.com/kb/en/library/getting-started-with-mariadb-galera-cluster/
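A minimal sketch of the Galera section each of the two FOG servers would carry, assuming MariaDB 10.x on a Debian-family system (the config path, provider library location, cluster name, and IPs are all placeholders - and note that a two-node cluster usually also wants a garbd arbitrator elsewhere to keep quorum):
cat > /etc/mysql/mariadb.conf.d/60-galera.cnf <<'EOF'
[galera]
wsrep_on                 = ON
wsrep_provider           = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name       = fog_cluster
wsrep_cluster_address    = gcomm://10.0.0.11,10.0.0.12
binlog_format            = row
default_storage_engine   = InnoDB
innodb_autoinc_lock_mode = 2
EOF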
-
RE: Fresh Debian 9 FOG server install no database?
@fpuser Maybe you fat-fingered the password entry? Try again. Also, what version of FOG?
Also, there would be errors in the installation log. This is in a new directory the installer makes inside the bin directory.
-
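On a typical install that looks something like the below - the directory name and the version in the log file name can vary, so adjust the path to wherever you unpacked the installer:
ls /path/to/fogproject/bin/error_logs/
tail -n 50 /path/to/fogproject/bin/error_logs/fog_error_*.log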
RE: About Fog image size limit
@johnny-t said in About Fog image size limit:
I want to transfer to the other system , and only send the image file.
Of course, and there’s a Wiki article written exactly for this: https://wiki.fogproject.org/wiki/index.php?title=Migrate_images_manually
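The wiki covers the full procedure (you also have to recreate the image definition on the destination server with matching fields), but the file copy itself can be as simple as the line below, with made-up hostnames and image name:
rsync -avP /images/win10-base/ root@new-fog-server:/images/win10-base/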
-
RE: Connection time out (4c0a6035)
@tlyneyor said in Connection time out (4c0a6035):
However unfortunately I am running through the imaging process and it is painfully slow. To the point where selecting the image and the group etc needed 3 or 4 attempts each, and it is currently downloading the image at around 50MB/min.
It screams network issue to me, but there are no other network problems at all that I can see, and I have run another machine on the same switch with no latency or connectivity problems.
That sounds like it could be a duplicate IP. You’ll get a ton of slowness and intermittent transmissions with a duplicate IP.
-
RE: FTP Login goes to FOG User Home Dir instead of dir in WebUI
@lovejw2 We need more details. Please answer these:
- What version of FOG?
- What OS & version?
- What’s the context?
- What’s broken?
-
RE: FTP Login goes to FOG User Home Dir instead of dir in WebUI
@lovejw2 said in FTP Login goes to FOG User Home Dir instead of dir in WebUI:
When I connect to the server via FTP the folder it connects to is the FOG user home dir and not the /images directory that is set in the FOG WebUI
I think that is normal behavior. Also, an FTP issue (which I don’t think you have) would not prevent an image deployment from being successful.
When you say massive errors on the main drive, what do you mean? Was the disk corrupted? Where did this image that failed to deploy come from? That bad disk? Also, the image definition you create in the web GUI must be exactly the same for certain fields, otherwise the image will fail to deploy.
See these for more details:
- https://wiki.fogproject.org/wiki/index.php?title=Migrate_images_manually
- https://wiki.fogproject.org/wiki/index.php?title=Migrate_FOG
Do you have a good backup of your old fog server, or a good backup of the images, or a good backup of the database? We can help you with restoring those.
-
RE: Integrity Plugin
@kagashe The integrity plugin is not code-complete, I think. You can use
md5sum
or
sha256sum
to check the hashes of your image files manually, though.
-
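For example, to record an image’s hashes and re-check them later (the image directory name is made up):
cd /images/win10-base
sha256sum * > /root/win10-base.sha256    # record the hashes
sha256sum -c /root/win10-base.sha256     # verify them later, or on another server holding a copy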
RE: hello win7 install document DHCP issue
@china-boy There’s not enough information in your post.
-
RE: Connection time out (4c0a6035)
@tlyneyor Glad I could help. I’ve seen duplicates a lot, so I’m really familiar with them and very wary of them.
An easy way to detect rogue DHCP services is to do a packet capture on a Linux box while issuing the commands to release the current lease and get a new one:
dhclient -r; dhclient
In the packet capture, you would see the rogue DHCP server’s IP address and MAC address.
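For the capture itself, something like this works (tcpdump assumed to be installed; the interface name is just an example):
tcpdump -i eth0 -n -e 'port 67 or port 68'   # watch DHCP traffic; the rogue DHCP server IP and MAC will show up here
dhclient -r; dhclient                        # run in a second terminal to trigger a new lease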
Also, there is a way on Linux to check a specific IP address to see if there are duplicates:
arping -D -q -I eth0 -c 2 192.168.1.250 ; echo $?
If you get a 0 back from that command, it means there are no duplicates. 1, or anything else that isn’t 0, means there are duplicates. Note that eth0 in the command is the name of the interface you are sending the request out of, and 192.168.1.250 is the IP being tested. This does not need to be run on the affected computer, and probably should be executed on a normally functioning system.