FOG Storage Replication
-
The images are stored directly on the primary FOG server. Basically what I am asking is: is it possible to have multiple FOG servers connecting to the same storage node? It's fine for site 2 to write images to the one storage node. It would just be nice to maintain the images from one site and have them automatically replicate to the other sites.
To answer the question: these are running on a VM on our production blade center.
-
@quinniedid I understand what you are asking, but your terms are a bit mixed up relative to how FOG operates.
The FOG universe has Full FOG servers (called Normal during installation) and FOG Storage Nodes (called Storage Node during installation). The only (real) difference between the two is that the Full FOG server has a MySQL database installed, whereas a Storage Node install requires access to a Full FOG server to operate. A FOG Storage Node doesn't have a web GUI; only a Full FOG server has a web GUI.
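For reference, the choice between the two is just an installer prompt. A minimal sketch, assuming the standard fogproject tarball layout (the exact prompts can differ between versions):

```bash
# From the unpacked fogproject sources
cd fogproject/bin
sudo ./installfog.sh
# When prompted for the installation type, choose:
#   N = Normal (full) server: web GUI + MySQL database
#   S = Storage Node: no GUI; points back at a full server's database
```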
Now you can create a storage group, which must have one Full FOG server and one or more storage nodes (or other Full FOG servers, for that matter). Images only replicate from the master node in the storage group to the non-master nodes in the group. Only master node FOG servers can be targets of image captures. While this isn't 100% correct, storage nodes are mostly read-only.
So now to what I think you want, since you have two Full FOG servers, one at each site.
At the HQ FOG server, create a storage group and add the HQ FOG server as the master node. In that same storage group, add the Remote FOG server as a non-master node. Note, you will need the `fog` service account password (from the /opt/fogproject/.fogsettings file on the remote FOG server) when setting up the non-master node on the HQ FOG server.

With this setup, all images from the HQ FOG server will be replicated to the Remote FOG server. You can control which images replicate based on the check box in each image definition that enables or disables replication. This checkbox is enabled by default.

The above will solve half of your issue by replicating the physical files to the Remote FOG server. What is missing is the metadata that is stored in the FOG database. You will have to manually copy the image definitions between your FOG servers using the image definition export and import functions. It's not the cleanest, but FOG wasn't really designed to work this way. It does work well, just not as integrated as we'd like.
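If it helps, you can read that password straight off the remote server. A minimal sketch, assuming the stock install path and that the key is still named `password` in your FOG version:

```bash
# On the remote FOG server: print the fog service account password,
# which you'll paste into the non-master node definition on the HQ server.
sudo grep "^password=" /opt/fogproject/.fogsettings
```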
-
@george1421 I understand now. I thought the metadata stayed with the image; with that not being the case, it would create a lot more work in this situation.
I appreciate the clarification and I appreciate you taking the time to answer my questions.
Again, thank you!
-
@quinniedid Well, it's not really a lot more work. All you need to do to export the metadata is go into the web GUI on the HQ FOG server and pick export, then call up the remote FOG server's web GUI and pick import.
But what you really need to think about is how often your golden images update. And when a golden image does update, would you create a new image definition for the updated image, or just recapture over the same image definition? One approach creates new metadata and new image files; the other just updates the image files.
-
@george1421 Oh, that is not bad at all then! We only update our images once every month or two, typically. We do overwrite the same image as well.
With that being said, would we have to export and import every time we change an image, or only when we create a new image altogether?
-
@quinniedid Only a new image creation would require the export/import. The updated data files will replicate automatically with each new upload. Newly created image files will also replicate automatically; the remote FOG server's IT admins just won't be able to see them until you create the image definitions on the remote FOG server.
-
@george1421 Perfect! Exactly what I was after! Thanks again!
-
@quinniedid said in FOG Storage Replication:
is it possible to have multiple FOG servers connecting to the same storage node?
I think perhaps because you are new to FOG, you have the terms mixed up. This is understandable. A FOG storage node is one that does not have a web interface. It can respond to TFTP requests (network booting) and imaging tasks, it connects to the FOG database remotely, and it carries out tasks as requested by the 'master' FOG server. For example, if the master FOG server deploys an image to a host that is local to a storage node (and the location plugin is configured), that host would image from the remote storage node.

Another thing to understand is that all image captures always go to the master node of that image's primary storage group, except where the location plugin is configured. That's a mouthful, yes. This article helps explain.

So you can control everything from a single FOG server - this is part of FOG's design. But a strictly 'read-only' node is not. All nodes are under the command of the master FOG server, and any one of them could be configured as a master in one storage group, a non-master in another storage group, or a member of TWO storage groups. The way you organize your groups and your masters determines the direction and behavior of replication. Images can also belong to multiple storage groups, which makes the replication model even more flexible. There's also the 'multi-master' implementation, which is not officially supported but which many people have chosen to use.
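To make the direction of replication concrete, here is a hypothetical two-site layout (the group and server names are made up purely for illustration):

```bash
# Storage group "HQ-Group":
#   HQ server      -> master      (captures land here)
#   Remote server  -> non-master  (receives replicated images)
#
# Storage group "Remote-Group":
#   Remote server  -> master      (captures local to the remote site)
#
# Replication always flows master -> non-master within a group, so the
# group memberships and master roles define what replicates where.
```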
I should ask you some specific questions that can help us understand exactly what your needs are. Please try to answer each.
- Is the replication link slower than 1Gbps?
- Do you plan to capture all images from one location and use them at all locations?
- Do you want to limit what images are replicated where, or do you want them all replicated to all locations?
- Do you have control of DHCP at all locations?
- Do you have employees at these various locations that you want to restrict to only their locations?
-
@Wayne-Workman
I will have to check out that link and video. Answers to your questions are as follows:
- The replication link is slower, probably about 20 to 40 Mbps
- We plan to capture the images from one location and use them at their specific locations. Some images will be for the other site and some will be for this site. We will also have some images that are shared between the two sites
- We will definitely want to limit which images are replicated where
- I have complete control of DHCP at both sites
- This is not absolutely necessary, but it would be nice to have for sure
Also, what port/protocol is used for storage replication or needed in this scenario?
-
@quinniedid said in FOG Storage Replication:
Also, what port/protocol is used for storage replication or needed in this scenario?
Depends on how you choose to set it up. Using the multi-master configuration that George explained, you just need FTP for replication, plus port 80 so the PHP scripts on the remote nodes can listen for requests from the master server.
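As a concrete illustration, here is a minimal sketch of those openings on a remote node, assuming a firewalld-based distro (use your distro's own firewall tooling as appropriate; passive FTP may need additional ports depending on your FTP server configuration):

```bash
# FTP carries the replicated image files; HTTP carries the requests
# from the master server to the node's PHP scripts.
sudo firewall-cmd --permanent --add-service=ftp    # port 21
sudo firewall-cmd --permanent --add-service=http   # port 80
sudo firewall-cmd --reload
```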
But with a 20 to 40 Mbps link between your sites, that's enough to just use FOG in the standard way - with one master server and storage nodes at your remote sites - and you can accomplish sending specific images to specific locations. You'd set up the location plugin, and you'd use FOG's group-to-group image sharing. There's no need to export/import anything in this setup, and it's officially supported.