Rolling FOG out to US Site
@Wayne-Workman Many thanks, gents. It all seems to be working here in the UK so I’ll ship it out tomorrow and hopefully set up the new VM later on this week/next week.
If I run into any issues I’ll post back :)
@RobTitian16 I only mentioned the script and its intended function because it would do what you need to re-IP a host. But I agree with Wayne: maybe a simpler one-time run script would be in order, because sometimes you DO have to renumber a FOG server after it's been set up.
@RobTitian16 Nothing additional, those are all the steps involved. DHCP or Static is up to you. I’ve used both just fine, both have strengths where the other has weaknesses. It’s a matter of preference.
@Wayne-Workman I’d prefer to use a static IP if possible (but am open to using a DHCP assigned IP if it’s easier to do). I followed this: https://wiki.fogproject.org/wiki/index.php/Change_FOG_Server_IP_Address and am currently updating the US FOG server to the latest version of trunk with the new settings. Would this suffice, or would I need to do additional configuration with the server?
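The wiki procedure linked above essentially amounts to pointing the saved installer settings at the new address and re-running the installer. A minimal sketch, assuming your installer wrote an `ipaddress=` line to `/opt/fog/.fogsettings` as recent trunk versions do; the demo edits a throwaway copy so it is safe to run anywhere:

```shell
# Sketch: re-pointing a FOG install at a new IP. The installer keeps its
# answers in /opt/fog/.fogsettings; the demo below edits a stand-in file
# instead of the live one.
NEW_IP="10.1.2.3"                     # hypothetical US-site address
demo=$(mktemp)
printf "ipaddress='192.168.1.10'\ndodnsmasq='1'\n" > "$demo"

sed -i "s/^ipaddress=.*/ipaddress='${NEW_IP}'/" "$demo"
grep '^ipaddress=' "$demo"            # ipaddress='10.1.2.3'
rm -f "$demo"

# On the real server (as root) you would edit /opt/fog/.fogsettings the
# same way, then re-run the installer so the web config, tftp and NFS
# settings get rewritten to match:
#   cd /path/to/fogproject/bin && ./installfog.sh -y
```

The re-run of the installer is what propagates the new address into the web interface and PXE configuration, which is why editing the settings file alone is not enough.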
Wayne Workman
@george1421 It also configures dnsmasq by default, remember. If you set `dodnsmasq` to 0 inside of `/opt/fog/.fogsettings` after installation, it would keep dnsmasq disabled, but the rest of the bits and pieces would keep the IP updated.
We could probably write a simplified version that just changes the IP where needed, from the source of the `FOGUpdateIP` tool; all the bits and pieces are there already for Fedora/CentOS/Debian.
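The `dodnsmasq` tweak Wayne describes is a one-line edit to the installer's saved-answers file. A sketch, demonstrated on a throwaway file rather than the live `/opt/fog/.fogsettings`:

```shell
# Sketch: keep dnsmasq disabled across installer re-runs by flipping
# dodnsmasq to 0 in the installer's saved-answers file. The demo uses
# a temp file standing in for /opt/fog/.fogsettings; on a real server
# run the sed line (as root) against the actual file.
settings=$(mktemp)
printf "dodnsmasq='1'\nipaddress='192.168.1.10'\n" > "$settings"

sed -i "s/^dodnsmasq=.*/dodnsmasq='0'/" "$settings"
grep '^dodnsmasq=' "$settings"    # dodnsmasq='0'
rm -f "$settings"
```

Because the installer re-reads `.fogsettings` on each run, the setting sticks through future upgrades.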
@RobTitian16 @Wayne-Workman has a script for fog servers with a dynamic IP address (i.e. assigned by dhcp) that would probably help here. That way you can be sure you get all of the places changed. Changing the fog server IP address after setup is a bit of a pain to do manually.
@george1421 Thanks for the help :) It really is appreciated!
I thought it might be easier to export my current FOG server, change the settings to match the US site (i.e. IP address, default gateway, etc.) and then send it to them on a USB.
Are there any FOG settings I would need to change? Obviously on the VM itself I need to change the host name and network settings.
You can look at the replication log in the log viewer.
web interface -> fog configuration -> log viewer -> image replication
You can look at sending and receiving bandwidth on the master node and new remote node via the Web interface.
You can look at used space on the remote node, refresh the page to watch it grow or wait for the auto refresh via the Web interface.
You could use `iftop -n` to actively monitor bandwidth at the CLI.
You could look to see the number of lftp instances with `ps -aux | grep lftp`.
You could check the status of the image replicator service with `systemctl status FOGImageReplicator` or `service FOGImageReplicator status`.
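A sketch combining the CLI checks above (all read-only; the systemd call is guarded so the snippet also degrades gracefully on SysV-init systems, where `service FOGImageReplicator status` is the equivalent):

```shell
# Sketch: read-only replication checks on the master node, per the
# list above.

# How many lftp transfers are running? (FOG replicates images with
# lftp, so 0 means replication is currently idle)
lftp_count=$(ps aux | grep -c '[l]ftp' || true)
echo "active lftp transfers: ${lftp_count}"

# Replicator service status (systemd only; guarded so this is a no-op
# where systemctl is missing or the unit is not installed)
if command -v systemctl >/dev/null 2>&1; then
    systemctl status FOGImageReplicator --no-pager || true
fi

# Live per-connection bandwidth (interactive; press q to quit):
#   iftop -n
```

The `[l]ftp` bracket trick keeps the grep process itself out of its own match, so the count is not off by one.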
@RobTitian16 TBH I forget where we are in this setup, so I'll try to answer correctly.
For a multi-master setup you will need to log into each FOG server at the location or region where you want to deploy the image. This has two sides: yes, it is more work because you have to know which region you are in to deploy an image, but it also stops you accidentally imaging a machine in the UK when you meant to do that in the US. Right now FOG doesn't have the level of control to say tech X can only deploy to the US and tech Y can only deploy to the UK if you use a single master node with multiple storage nodes.
The idea of a multi-master setup is that each master node at each location is independent and doesn't need a connection to HQ to be able to image systems. This reduces the importance of the WAN for image deployment. If you use a single master node with storage nodes, then you MUST have the WAN up to be able to image at the remote locations. The decision is yours on how you want to run things.
A downside to the multi-master setup is that you don't have one place to look for all computers in your organization, since each remote site has its own master node with its own SQL database.
As for replication… I don't think there is any visibility into how far along it is or where it is in the process. You can set up replication and let it do its thing. The remote sites won't see the image until you load the image details on the remote FOG servers (in a multi-master setup); as for a storage node, it shouldn't let you create the task unless the image is available on the storage node.
IMO FOG needs a little more work in the distributed enterprise area. There are some things that I wish FOG did for my business. But we’ve been able to work around them to have a successful FOG system.
@george1421 Thanks for the detailed explanation, George.
It sounds interesting and could be useful to us. However, what if I wanted to schedule an image to be deployed for the US at a certain time (that's convenient for them)? Would I then have to log in to their FOG server to schedule it, or could I log in to the main UK-based FOG server and schedule it? And if they register any new hosts, would those then come back to the main FOG server and be listed in the web GUI?
Finally, is there a way of checking the image replication on the storage node? I’ve currently got a ‘test storage node’ up and running, and I’ve joined it as well as a few images to the same group (Test-US), but I can’t actually see if the images are on the node or not. I just want to ensure that when the US does image a computer, it’s going to go from their storage node instead of across the VPN.
If you have technicians in the U.S. that are proficient with Linux, you could overnight an external drive with new images on it and have technicians place these new images, or at least hook up the drive locally for you to do the placing remotely. But you’ve not revealed how fast your link is so even this might be slower than just letting it replicate.
@RobTitian16 Either way you will have the replication time impact your ability to use that new image. Depending on the pipe size between the UK and US you may be talking about days for replication to happen. The bad part of taking days to replicate: you won't be able to use the existing target image at the remote sites until the replication completes, because FOG uses an in-place replication process. You can throttle the bandwidth used for replication, but not based on time of day. I'm not trying to turn you off to FOG, just pointing out a few pain points. Will FOG work for you? Yes, there are just some things that have to be thought through.
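To put rough numbers on that replication delay (the 80 GB image size comes from the thread; the 10 Mbit/s site-to-site link speed is purely an assumed figure, substitute your own):

```shell
# Back-of-envelope replication time. 80 GB is the image size mentioned
# in the thread; link_mbps is an assumed value for illustration only.
image_gb=80
link_mbps=10

# GB -> gigabits (x8), gigabits -> megabits (x1000), divide by link rate
seconds=$(( image_gb * 8 * 1000 / link_mbps ))
printf 'best case: ~%d hours (%d s), ignoring protocol overhead\n' \
    $(( seconds / 3600 )) "$seconds"
```

So even on a dedicated 10 Mbit/s pipe an 80 GB image takes the better part of a day, which is why George warns about replication windows stretching into days on a shared VPN.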
As for the process being documented: no. It's not an officially supported setup, but it works. The risk is that if it's documented, then someone might think it is a process supported by the developers. The import and export of the db configurations is done via the web GUI, so that is pretty straightforward.
The officially supported process is with a single master node and storage nodes at each location. Replication happens just like in the multi-master setup, except that each storage node (which is a full FOG server without its own database) uses the database on the master server. So the remote storage nodes must be in contact with the master node for imaging to work. In the multi-master case, by contrast, the remote sites only get their images from the master node (with the images being pushed from the master node), so each site's FOG server runs independently of the others.
@george1421 Thanks for this, guys.
My main concern is when I update the images and then want the nodes to update the images they have stored as well. For example, if I update an 80GB image here in the UK, it will take quite a while for the node in the US to update. Is there any workaround for this? Or would I just have to bear with it and try to update the images as little as possible?
The second option you mention sounds like something we could make use of. Is this process documented anywhere? I'm not familiar with exporting the image definitions and then importing them on the remote FOG server. I'm keen to test this, so will probably do so on Monday to see if it would work for us.
Thanks for the suggestions, gents! Much appreciated as always :)
Wayne Workman
Here’s an introduction to the location plugin. It doesn’t cover everything or every configuration, but it’ll get you going with it.
Rob, you have a couple of options here, depending on how you want to manage your FOG install and how fast (big) your UK-to-US link is.
One option is to set up a storage node in the US. If you are using FOG 1.3.0-RCx then the storage nodes already have the tftp kit built in. In this case you will surely want to use the location plugin with FOG and then define a location for the UK and a second one for the US. Actually, I would create a location for every physical location you have; that way the FOG clients will always connect and image from the storage node that is closest to them. The downside to this is that if you are using the FOG client on the target computers, they will check in with ("ping") the FOG server every 5 minutes for new instructions. If you have a very large number of computers, this check-in may consume all of your site-to-site bandwidth.
The other method is not currently supported in FOG, but it works very well and is what I use at my company. This is called a multi-master node setup. In this case each location has its own FOG server that the local IT techs use like a standalone server, but in our case the images are managed at HQ. So what we have at HQ is a development FOG server (where we create and test the new images). On that HQ development server we have a storage group set up; in that storage group the HQ development server is the master, and each site's FOG (master) server is set up as a storage node. So when we approve and release a new image at HQ, it is replicated to each site's FOG server automatically.
As long as you are only updating master images this process works flawlessly. If you add a new image to the development FOG server at HQ, you must export the image definitions and then import them on each site's FOG server. Understand that replication will happen automatically; the sites just will not be able to see the new image until you import the image definition on the remote site's FOG server. While this process sounds a bit complicated, it's not. Plus it has the advantage that each site's IT logs into their own local FOG server, so they can't accidentally deploy an image to a remote site's computer (i.e. a site A tech can't deploy an image to site B's computer by accident).
@RobTitian16 This is the beauty of the location plugin: it allows you to manage everything centrally while ensuring hosts only pull images from their own location.
I don't know what version you're planning to use, but I'd recommend the current RC series, as it also allows you to install tftp on the storage node.
Images can be stored in multiple groups in the RCs as well, which enables you to replicate images between groups. The initial replication may take a while, though, due to the speed of the connection and the size of the image or images you're trying to replicate. The RCs also have this same ideology for snapins.
For automating things I'd highly recommend using the new FOG client. While there are many reasons to use the client, one of the biggest for your use case (if you're using snapins, at least) is that snapins will also be downloaded from the location defined for the host.
Hopefully this at least points you in the right direction.