Remote node on different DHCP and subnet
- FOG Version: 1.4.1 (with fix)
- OS: Debian 8.8
- Service Version:
Trying to set up a deployment node in a remote leased space that has a different DHCP server and is on a different subnet, but that will still connect to our main server. The DHCP server has a specified PXE boot IP and we can’t easily change that. We gave the node a static IP at that address and followed the rest of the node setup instructions. The computers in that lab connect to the node, but they hit a chainloading error boot loop, and the node does not appear in the GUI dashboard’s storage node disk usage tab.
Not being able to alter the remote DHCP server is a problem, but you can fix it by installing dnsmasq on your storage node. dnsmasq will then supply the PXE boot options you need.
It wouldn’t be a problem if the DHCP server didn’t have any 066/067 options set, but in this case it does. I think proxyDHCP will be hit or miss in this situation, and even if it works, it may only work intermittently.
So I can’t capture images from a node, right?
I’m glad you got the SQL password worked out. If you changed the SQL password from what FOG thought it was, check your /opt/fog/.fogsettings file to ensure the password in there is in sync with what you changed it to. If you forget, the next time you go to upgrade, the upgrade will fail.
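For reference, the credential lives on a single line of that file. A sketch of the relevant entry (the value shown is a placeholder; `snmysqlpass` is the variable the FOG installer reads):

```
# /opt/fog/.fogsettings (excerpt)
# must match the password the MySQL fog user actually has,
# or the database step of the next upgrade will fail
snmysqlpass='your-new-sql-password'
```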
Just so I’m clear: you cannot capture to a storage node, only deploy from it. If you want to capture to a remote storage node, you will need to install a full FOG server there.
I figured it out. We messed around with our SQL password. I just fixed it and now it works. So I can’t capture images from a node, right?
@george1421 I just updated to be safe but it still says this
The issue is with imaging, because there is a chainloading error. Could it be that the server and the node are on different versions of FOG (1.4.1 and 1.4.2)?
@zclift15 Just so I’m clear: the issue isn’t with client imaging, it’s related to the information being displayed correctly on the dashboard?
The DHCP SHOULD already be set up, and it seems like it is, because the computers PXE-boot to the FOG node. The problem is that the node cannot connect to our main server for some reason. We can’t seem to get it to appear on the dashboard.
If you set up the FOG server at the remote location as a storage node, you should have almost everything you need. Not being able to alter the remote DHCP server is a problem, but you can fix it by installing dnsmasq on your storage node. dnsmasq will then supply the PXE boot options you need. In the case of the dnsmasq server at the remote location, you will point DHCP option 66 at the storage node.
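A minimal proxyDHCP sketch along those lines, assuming the remote subnet is 10.0.0.0/24 and the storage node sits at 10.0.0.10 (both placeholders; undionly.kpxe is FOG’s usual BIOS boot file):

```
# /etc/dnsmasq.d/ltsp.conf on the storage node (sketch)
port=0                              # no DNS service, proxyDHCP/PXE only
log-dhcp                            # log PXE transactions for debugging
dhcp-range=10.0.0.0,proxy           # act as proxyDHCP on the remote subnet
dhcp-boot=undionly.kpxe,,10.0.0.10  # boot file; TFTP server = storage node
pxe-service=x86PC,"Boot to FOG",undionly.kpxe
```

In proxy mode dnsmasq supplies only the PXE boot options; the existing DHCP server keeps handing out addresses as before.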
You will also want to install the FOG location plugin and assign the storage node to a remote location. Then, as you add clients, you will assign those clients to that remote location. That way your clients won’t image over the site-to-site link.
Just remember all image captures go to the master fog server. Replication happens only from the master node to a storage node.
If you want to capture images at the remote location and keep them stored there, then you will need a full FOG server at that site.
Why can’t the existing DHCP options 066 and 067 be easily changed?
If there is some existing service using those options, you would need to add a menu entry in FOG for it, so devices at that location can still reach that existing service, and then change the DHCP options to point to FOG. That way, through the FOG boot menu, people there can still use whatever they were using before.
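As a sketch of what that menu entry could contain: FOG’s boot menu is iPXE-based, so the entry’s parameters can chainload the old service. The address and boot file below are hypothetical placeholders, not values from this thread:

```
# Hypothetical parameters for a FOG boot-menu entry that hands off
# to the pre-existing PXE service (address and filename are placeholders)
chain tftp://10.0.0.5/legacyboot.0 || goto MENU
```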
If there’s no reason for how it’s set, then the DHCP options simply need to be changed to point to the FOG server, and you’re done.
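If the remote DHCP server happens to be ISC dhcpd, that change amounts to two lines in the subnet declaration. A sketch with placeholder addresses (undionly.kpxe is FOG’s usual BIOS boot file):

```
# /etc/dhcp/dhcpd.conf (sketch; addresses are placeholders)
subnet 10.0.0.0 netmask 255.255.255.0 {
  range 10.0.0.100 10.0.0.200;
  next-server 10.0.0.10;      # option 066: TFTP server = the FOG server
  filename "undionly.kpxe";   # option 067: boot file name
}
```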