Rolling FOG out to US Site
-
Here’s an introduction to the location plugin. It doesn’t cover everything or every configuration, but it’ll get you going.
https://wiki.fogproject.org/wiki/index.php?title=Location_Plugin
-
@george1421 Thanks for this, guys.
My main concern is when I update the images and then want the nodes to update their stored copies as well. For example, if I update an 80GB image here in the UK, it will take quite a while for the node in the US to update. Is there any workaround for this? Or would I just have to bear with it and try not to update the images any more than I have to?
The second option you mention sounds like something we could make use of. Is this process documented anywhere? I’m not familiar with exporting the image definitions and then uploading them on the remote FOG server. I’m keen to test this, so I will probably do so on Monday to see if it would work for us.
Thanks for the suggestions, gents! Much appreciated as always
-
@RobTitian16 Either way, the replication time will impact your ability to use that new image. Depending on the pipe size between the UK and US, you may be talking about days for replication to happen. The bad part of days to replicate: you won’t be able to use the existing target image at the remote sites until the replication completes, because FOG uses an in-place replication process. You can throttle the bandwidth used for replication, but not based on time of day. I’m not trying to turn you off to FOG, just pointing out a few pain points. Will FOG work for you? Yes, there are just some things that have to be thought through.
As for the process being documented: no. It’s not an officially supported setup, but it works. The risk is that if it’s documented, someone might think it is a process supported by the developers. The import and export of the database configurations is done via the web GUI, so that part is pretty straightforward.
The officially supported process is a single master node with storage nodes at each location. Replication happens just like in the multi-master setup, except that each storage node (which is a full FOG server with a database) uses the database on the master server. So the remote storage nodes must be in contact with the master node for imaging to work. In the multi-master setup, by contrast, the remote sites only get their images from the master node (with the images being pushed from the master node), and each site’s FOG server runs independently of the others.
-
If you have technicians in the U.S. who are proficient with Linux, you could overnight an external drive with the new images on it and have those technicians place the images, or at least hook up the drive locally for you to do the placing remotely. But you’ve not revealed how fast your link is, so even this might be slower than just letting it replicate.
-
@george1421 Thanks for the detailed explanation, George.
It sounds interesting and could be useful to us. However, what if I wanted to schedule an image to be deployed for the US at a certain time (one that’s convenient for them)? Would I then have to log in to their FOG server to schedule it, or could I log in to the main UK-based FOG server and schedule it there? And if they register any new hosts, would those then come back to the main FOG server and be listed in the web GUI?
Finally, is there a way of checking the image replication on the storage node? I’ve currently got a ‘test storage node’ up and running, and I’ve joined it, as well as a few images, to the same group (Test-US), but I can’t actually see whether the images are on the node or not. I just want to ensure that when the US does image a computer, it’s going to go from their storage node instead of across the VPN.
-
@RobTitian16 TBH I forget where we are in this setup, so I’ll try to answer correctly.
For a multi-master setup, you will need to log into the FOG server at the location or region where you want to deploy the image. This has two sides: yes, it is more work, because you have to know which region you are in to deploy an image, but it also stops you accidentally imaging a machine from the UK when you meant to do that in the US. Right now FOG doesn’t have the level of control to say tech X can only deploy to the US and tech Y can only deploy to the UK if you use a single master node with multiple storage nodes.
The idea of a multi-master setup is that the master node at each location is independent, so it doesn’t need a connection to HQ to be able to image systems. This reduces the importance of the WAN for image deployment. If you use a single master node with storage nodes, then you MUST have the WAN up to be able to image at the remote locations. The decision is yours on how you want to run things.
A downside to the multi-master setup is that you don’t have one place to look for all computers in your organization, since each remote site has its own master node with its own SQL database.
As for replication… I don’t think FOG shows you how well it’s going or where it is in the process. You can set up replication and let it do its thing. In a multi-master setup, the remote sites won’t see the image until you load the image definitions on the remote FOG servers; as for a storage node, it shouldn’t let you create the task unless the image is available on that storage node.
IMO FOG needs a little more work in the distributed enterprise area. There are some things that I wish FOG did for my business. But we’ve been able to work around them to have a successful FOG system.
-
You can look at the replication log in the log viewer.
Web interface -> FOG Configuration -> Log Viewer -> Image Replication
You can look at sending and receiving bandwidth on the master node and new remote node via the Web interface.
You can look at used space on the remote node, refresh the page to watch it grow or wait for the auto refresh via the Web interface.
You could use iftop -n to actively monitor bandwidth at the CLI.
You could look at the number of running lftp instances with ps aux | grep lftp.
You could check the status of the image replicator service with systemctl status FOGImageReplicator or, on SysV-style systems, service FOGImageReplicator status.
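To pull those checks together, here is a minimal health-check sketch. Two assumptions not stated in the thread: a systemd-based distro, and the replicator log living at /opt/fog/log/fogreplicator.log (a common location on trunk installs; adjust if yours differs).

```shell
#!/bin/sh
# Quick replication health check for a FOG master node.
# Assumes systemd and a default trunk install; adjust names/paths to taste.

echo "== FOGImageReplicator service =="
systemctl is-active FOGImageReplicator 2>/dev/null \
    || echo "service not reported active (or non-systemd system)"

echo "== Active lftp transfers =="
# Each in-flight image transfer shows up as an lftp process.
# The [l] bracket trick stops grep from matching itself.
count=$(ps aux | grep -c '[l]ftp')
echo "$count lftp process(es) running"

echo "== Last replicator log entries =="
tail -n 5 /opt/fog/log/fogreplicator.log 2>/dev/null \
    || echo "log not found at the assumed path"
```

On a SysV-style system, swap the systemctl line for service FOGImageReplicator status.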
-
@george1421 Thanks for the help! It really is appreciated!
I thought it might be easier to export my current FOG server, change the settings to match the US site (i.e. IP address, default gateway, etc.) and then send it to them on a USB.
Are there any FOG settings I would need to change? Obviously on the VM itself I need to change the host name and network settings.
-
@RobTitian16 @Wayne-Workman has a script for FOG servers with a dynamic IP address (i.e. assigned by DHCP) that would probably help here. That way you can be sure you get all of the places changed. Changing the FOG server IP address after setup is a bit of a pain to do manually.
-
@george1421 It also configures dnsmasq by default, remember. If you set dodnsmasq to 0 inside of /opt/fog/.fogsettings after installation, it would keep dnsmasq disabled, but the rest of the bits and pieces would keep the IP updated.
We could probably write a simplified version that just changes the IP where needed; from the source of the FOGUpdateIP tool, all the bits and pieces are there already for Fedora/CentOS/Debian.
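For what it’s worth, flipping that flag is a one-liner with sed. A sketch, run against a throwaway temp file rather than the real /opt/fog/.fogsettings; the key='value' quoting below is assumed to match what the installer writes.

```shell
# Demo on a temp copy; on a real server the file is /opt/fog/.fogsettings.
settings=$(mktemp)
printf "ipaddress='10.0.0.10'\ndodnsmasq='1'\n" > "$settings"

# Turn the dnsmasq flag off so the next installer run leaves dnsmasq disabled.
sed -i "s/^dodnsmasq=.*/dodnsmasq='0'/" "$settings"

grep dodnsmasq "$settings"   # prints dodnsmasq='0'
```

After a change like this you would re-run the FOG installer so it picks the value up.
-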
@Wayne-Workman I’d prefer to use a static IP if possible (but am open to using a DHCP assigned IP if it’s easier to do). I followed this: https://wiki.fogproject.org/wiki/index.php/Change_FOG_Server_IP_Address and am currently updating the US FOG server to the latest version of trunk with the new settings. Would this suffice, or would I need to do additional configuration with the server?
-
@RobTitian16 Nothing additional, those are all the steps involved. DHCP or Static is up to you. I’ve used both just fine, both have strengths where the other has weaknesses. It’s a matter of preference.
-
@RobTitian16 I only mentioned the script and its intended function because it would do what you need to re-IP a host. But I agree with Wayne: maybe a simpler one-time run script would be in order, because sometimes you DO have to renumber a FOG server after it’s been set up.
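As a sketch of what such a one-time script might look like (the addresses are made up, and it is demonstrated on a temp copy; on a real server you would point it at /opt/fog/.fogsettings and then re-run the installer so every dependent config picks up the change):

```shell
# Hypothetical one-time re-IP helper, run against a temp copy for safety.
OLD_IP='192.168.1.10'   # made-up current (UK) address
NEW_IP='10.20.30.10'    # made-up new (US) address

settings=$(mktemp)
printf "ipaddress='%s'\n" "$OLD_IP" > "$settings"

# Swap every occurrence of the old address for the new one.
sed -i "s/$OLD_IP/$NEW_IP/g" "$settings"

grep ipaddress "$settings"   # prints ipaddress='10.20.30.10'
```

Wayne’s script presumably touches more places than this; the sketch only shows the core substitution.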
-
@Wayne-Workman Many thanks, gents. It all seems to be working here in the UK so I’ll ship it out tomorrow and hopefully set up the new VM later on this week/next week.
If I run into any issues, I’ll post back.
-
@Wayne-Workman Thanks for this, Wayne. Is it normal to see:
[11-01-16 9:14:18 am] | Image name: Win7Clientx86
[11-01-16 9:14:18 am] * Found Image to transfer to 1 node(s)
On both the master node in the UK and the US storage node? I can’t see any received data on the US FOG server, so I’m concerned that the replication is not working correctly.
-
@RobTitian16 Is the US server a full FOG server or a storage node?
If both are full FOG servers, then you only configure the master server for the UK; the US FOG server should think it’s standalone. So you don’t set up any storage groups or anything there. That is all done on the UK server.
-
@george1421 The US server is a full FOG server, but it’s listed as a storage node on the UK server. The only storage nodes on the US server are its own.
-
@RobTitian16 Are they associated with groups? If they are, is there another group they can belong to?
Replication only happens from the master node to the other nodes, and only within a storage group.
If an image belongs to multiple groups, though, the master of its “primary” group will only distribute to the other groups’ master nodes (and those master nodes will distribute to their own group’s subordinates).
-
@Tom-Elliott Yes, on the UK server the images I want to replicate belong to the primary storage group called UK, and a secondary storage group called US.
The US server is listed as a storage node on the UK server, although I’ve just noticed it isn’t listed as a master node. The UK server is in its own storage group, as is the US server, so I assume all I need to do is enable the US server as a master node and it should then work?
P.S. sorry if it seems like a dumb question… I just don’t want replication to occur the wrong way/have the images wiped.