2 NIC Host: Set 1 NIC to Remote Management/Replication and 2 NIC to Imaging
-
Hello,
I have a Master and a Storage Node both configured with 2 NICs (eno1 and enp2s0). I want eno1 to be connected to my internal network for remote management as well as image replication. I want enp2s0 for imaging (DHCP, TFTP, PXE).
In the FOG console under Storage, if I set the node’s interface and IP to the static address on eno1, replication works, but imaging doesn’t. If I set them to the static address on enp2s0, imaging works, but replication doesn’t. What file do I need to edit so that replication happens over eno1 but imaging only happens over enp2s0?
Any help would be appreciated!
Thank you!
Ryan -
@rtarr FOG is designed to have one imaging network.
(I’m going to read between the lines here; if this isn’t how you have it set up, then please clarify.)
All devices on that network must be able to reach the master node during PXE booting to find out where their assigned storage node (master or slave) is located. Replication between the master and storage nodes also happens on that same imaging network, using the IP addresses defined in the storage configuration panels. Dual IP addresses for the FOG imaging infrastructure are not supported.
With that said there are some things you can do.
Your FOG server can function as you described, with a management NIC and an imaging NIC. The only thing the management NIC can do is manage your FOG server; it can’t be used for any imaging functions.
I suspect that you have a master node at one location and a storage node at a different location, where the defined imaging networks are not connected for some reason. That is why you want to replicate the raw data on a different interface than your imaging network. Can this be done? Not with FOG itself, but it can be done. Simply disable the FOG image replication service, then use rsync and cron to replicate the /images directory between the master node and the storage node on a schedule. rsync can also limit bandwidth (--bwlimit) if you need to slow down the transfer. With rsync, just use the IP address on whatever network you want to send the raw files across.
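A minimal sketch of what that could look like, assuming SSH key authentication from the master node to the storage node and placeholder addresses/paths (10.0.0.20 stands in for whichever storage-node address sits on your management network):

    # /etc/cron.d/fog-image-sync on the master node (sketch, not FOG's own replicator)
    # Every night at 01:00, push /images to the storage node over the management network,
    # capped at roughly 20 MB/s so the link isn't saturated.
    0 1 * * * root rsync -a --delete --bwlimit=20000 /images/ 10.0.0.20:/images/

Because the destination is given by IP address, the transfer leaves over whichever interface routes to that address, which is how the replication traffic stays on eno1.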
Just be aware that the storage node must be in contact with the FOG server 100% of the time or imaging won’t happen in the remote location, because the storage node is using the database from the master node to get its instructions.
-
@george1421 I am using two NICs because I want all the imaging to happen on a segregated network and any repo updates or image replication to happen over the LAN NIC. This prevents imaging from happening over our firewall tunnels.
With that said, I have the Location plugin enabled and set up so that inits/kernels are pulled from the local storage node. However, you are saying the remote storage node needs to stay connected to the master for the duration of imaging because it is using the master’s database. So if I still want imaging to not go over our firewall tunnels (i.e. pushing a 30-60 GB image to a device between the master and remote storage node), I need the Location plugin enabled and the image replicated. Instead of having DHCP and TFTP on the imaging NIC, could I just use one NIC on our LAN and have the Location plugin ensure the image comes locally from the storage node? That way the device being imaged can pull its instructions from the master and the bulk data transfer stays local. Do I have that right?
-
@rtarr said in 2 NIC Host: Set 1 NIC to Remote Management/Replication and 2 NIC to Imaging:
I am using two NICs because I want all the imaging to happen on a segregated network and any repo updates or image replication to happen over the LAN NIC. This prevents imaging from happening over our firewall tunnels.
Having a second NIC and a dedicated imaging network may not be necessary to avoid imaging over the VPN tunnel.
(Connect both the master node and the storage node to your business network.)
At the remote site, set DHCP option 66 to the local storage node’s IP address and DHCP option 67 to ipxe.efi, or use DHCP profiles or dnsmasq running on the remote storage node to point PXE-booting clients at the remote storage node. This keeps the iPXE boot loader from being downloaded over the VPN tunnel.
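If the remote site happens to run ISC dhcpd, a minimal sketch of that scope could look like this (the subnet and the storage-node address are placeholders for your own values):

    # Remote-site dhcpd.conf scope (sketch; adjust subnet/addresses to your site)
    subnet 10.20.30.0 netmask 255.255.255.0 {
      range 10.20.30.100 10.20.30.200;
      next-server 10.20.30.5;   # option 66: the local FOG storage node
      filename "ipxe.efi";      # option 67: the UEFI iPXE boot file
    }

On a Windows DHCP server the equivalent knobs are scope options 066 (Boot Server Host Name) and 067 (Bootfile Name).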
At HQ, again set DHCP option 66 to the master node’s IP address and DHCP option 67 to ipxe.efi. On the master node, install the Location plugin. Create your two locations and assign the storage nodes to those locations.
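The HQ scope takes the same shape, just pointing at the master node (again a sketch, placed inside your existing HQ subnet declaration, with a placeholder address):

    next-server 10.10.10.5;     # option 66: the FOG master node
    filename "ipxe.efi";        # option 67: the UEFI iPXE boot file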
Now assign the target computers to the proper location, either via the web UI or when you do a full registration.
So at this point you have a local and remote location defined. You have the master node assigned to the local location, the remote storage node to the remote location. You now have the target computers assigned to the proper location.
So now you PXE boot a computer at the remote location. It will pull ipxe.efi from the local storage node. The target computer will then make an HTTP call to the master node to find out where its assigned storage node is located. If that storage node is local to the PXE-booting computer, imaging happens entirely at the remote site. There will be small HTTP status calls to the master node to let the FOG server know where it is in the imaging process.
The only thing you can’t do in this setup is multicast imaging; that can only happen from the master node in any storage group.