How to sync StorageGroup masters with default group?
-
From what I’ve read @ https://wiki.fogproject.org/wiki/index.php?title=Managing_FOG#Storage_Management, it appears that FOG will only replicate images to other members of any given storage group. If true, I’m looking for a way to sync the /images dir among masters. I know there’s rsync but…
We are a global company with offices in 6+ locations but minimal IT staff at the other offices. What I’d like to do with FOG is centrally create all images in SF and push them out to all ‘is master’ SNs that are the masters of geographically designated StorageGroups (e.g. dublin, zurich, etc.). That should keep remote clients from pulling images from SF, so every client gets its images locally.
Long-winded question short: does FOG have a way to do this natively, or am I stuck managing rsync scripts against the ‘default’ /images store? There will never be a location-specific image, so the image I build in SF for ‘Lenovo X1 Carbon Gen3’ will be used globally.
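To be concrete, this is the sort of thing I mean by managing rsync scripts myself - just a sketch, with placeholder hostnames, assuming passwordless SSH from the SF master to each remote master and /images as the store path on both ends:
```bash
#!/bin/bash
# Sketch only: push the SF master's image store out to each remote master.
# Hostnames are placeholders; assumes key-based SSH and /images on both ends.
REMOTE_MASTERS="dublin-fog.example.com zurich-fog.example.com"

for master in $REMOTE_MASTERS; do
    rsync -avz --delete /images/ "root@${master}:/images/"
done
```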
-
Upgrade to trunk and what you request is already built in.
-
You associate images with storage groups and can now choose multiple groups.
It will sync group->group and it will sync master->subnodes.
Hopefully this helps.
-
@Tom-Elliott Are the multiple groups a feature of trunk? In 1.2.0 I only have a single drop-down. Or do I need to maintain X copies of the exact same image, one per SG, each owned by a different master? The reason I ask is that as soon as I moved my SN to ‘Singapore’ and set ‘is master’, the replication log stopped showing any replication, though it was replicating when all hosts were in the same group.
-
@BardWood In 1.2.0, you may have more than one storage group. You’d create a new group inside of Storage Management (in the web interface), but then you’d need to manually edit the `/etc/exports` file on the server to include the additional directory. You’d also need to create a `.mntcheck` file both in the root of the new storage location and in a subdirectory called `dev`, with a `.mntcheck` file in there too. Then you’d need to set ownership to `fog:root` recursively on this directory and set 777 perms on it recursively.

Please don’t call storage nodes ‘hosts’ - it confuses us and will definitely confuse fog newbies.
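Roughly, that manual setup looks like this - a sketch only, assuming the new storage location is /images2 and reusing the export options FOG’s installer writes for the default /images share (the fsid values just need to be unique):
```bash
#!/bin/bash
# Sketch: prepare an additional storage location for a new storage group.
NEWDIR=/images2   # assumed path for the new storage location

# .mntcheck markers in the root of the store and in its dev subdirectory
mkdir -p "$NEWDIR/dev"
touch "$NEWDIR/.mntcheck" "$NEWDIR/dev/.mntcheck"

# ownership and permissions, recursively
chown -R fog:root "$NEWDIR"
chmod -R 777 "$NEWDIR"

# NFS exports, mirroring the stock /images entries (unique fsid per export)
cat >> /etc/exports <<EOF
$NEWDIR *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=2)
$NEWDIR/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=3)
EOF
exportfs -ra
```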
-
@Wayne-Workman Maybe I’m missing something. I created a new image for ‘Optiplex7010’ with no group assignment (so, default). The storage node, ‘Singapore’, is master of its own group but never got the updated ‘Optiplex7010’ image from the default master (I verified it exists on the default master and wrote it back to a few machines). Intra-group syncs are working. If I clear ‘is master’ & move ‘Singapore’ into the default group, I can see replication happening in the logs and indeed the image appears on ‘Singapore’. But the default master (the ‘normal mode installed’ FOG server) doesn’t do any replication if I move the ‘Singapore’ SN back to its own group and check ‘is master’. In this example, group => group isn’t replicating. Is it supposed to?
It sounds like you are suggesting I create a new group (let’s call it ‘masters’) where I’d leave the default as master and add the storage nodes as members. If you were just telling me how to set up my custom dirs: that was a byproduct of adding more storage volumes in VMware, and I have moved /images to /fog_images on the default master without issue. This includes the NFS share, perms on /images & /images/dev, the /etc/exports setup, and changing the storage path in the web portal. Does this mean a StorageNode can be a member of multiple groups where it is the master of one of them?
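For reference, after the move my two export entries look roughly like this (same options the installer wrote for the original /images share, just with the new path):
```
/fog_images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/fog_images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```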
-
@BardWood In FOG 1.2.0, storage groups do not replicate to each other; only storage nodes within the same storage group replicate amongst themselves. Tom explained this already in his first two posts, albeit briefly.
1.2.0 had weird storage group rules and replication rules, and didn’t enforce them either. For instance, in 1.2.0 you could set two nodes as masters in the same group and then disaster would ensue.
Some rules about groups and replication from 1.2.0 if it helps to clear up confusion:
- A storage group can have a master node.
- A storage group can have no more than one master node.
- A storage group must have a master node for replication to occur.
- Replication happens from the master node in a group to other nodes in the same group.
- All uploads to a storage group upload to the master node if one is set.
- Multicast only works if a master node is set, and is only streamed from a master node.
Additional rules for FOG 1.3.0 (which is not yet released as of February 2016 - this is just for future readers):
- An image can belong to more than one storage group.
- If an image belongs to more than one storage group and the requirements for replication in both groups are met, the image will only replicate from its original master to the other master, and not the other way.
- The `FOGImageReplicator` service now controls the replication processes far better; if this service is stopped, all FTP processes related to replication are also stopped.
- There are now very detailed logs concerning replication in FOG’s Log Viewer.
And what I said earlier was due to a misinterpretation of your earlier post; apologies. I see what you meant to say now. I don’t think a node can be a member of more than one group in 1.2.0, but I’m not sure. I haven’t really used 1.2.0; I just know the major differences between it and the current fog trunk version.
-
@Wayne-Workman Thank you, Wayne. That really does clear things up. It sounds like the easiest thing to do (short of upgrading to trunk) is what I’ve been doing: do a round of image updates and assign them to default, clear ‘is master’ from all other storage nodes and move them to the default group, watch the logs for the sync to complete, then move them back to their respective groups and recheck ‘is master’ on each SN, since I only have a single SN per group. I only update these images a few times per year, so that’s really not that painful. I could manage rsync scripts or a manual process, but I’d rather let FOG do it, especially if 1.3.0 will be along sometime in the not-too-distant future. Much appreciated!
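In case it helps the next person doing this dance, I watch the sync from the server with something like this (assuming a default install, where the replicator writes its log under /opt/fog/log):
```bash
# Follow the image replicator's log to see when the sync finishes
tail -f /opt/fog/log/fogreplicator.log
```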