Master-Master Replication



  • Hello,

    I have 2 fog servers running the latest SVN, both normal installations, not storage nodes. The fog servers are at different locations with VPNs between them for communication. Is it possible to replicate the images from my primary site to my sub locations so my tech at the sub location can image computers with my gold master image?


  • Moderator

    @george1421 said in Master-Master Replication:

    Right now FOG doesn’t have a real good way to limit what techs can do at each location. For example we would only want IT techs at site A to be able to image machines at site A and not site B. As well as the other way around. If you had FOG setup to automatically PXE boot into FOG, techs at site A could accidentally reimage a machine at site B.

    Sounds like a great opportunity to solve a problem, to me. I could write a PHP service that would just undo any tasking that a tech does at the wrong site. In fact - I can write a stand-beside fog thingy that will let you assign locations to fog users. It can be a non-required extra field in the users table, and won’t affect FOG functionality.

    At work, we don’t have problems with bandwidth between sites. It’s 1Gbps throughout 23 buildings separated by miles each. But we do have a potential problem with techs doing things they shouldn’t be doing.
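The stand-beside service sketched above boils down to a simple policy check. Here is a rough illustration of that logic (all field names like `created_by` and the location maps are hypothetical - FOG's actual tables differ, and the optional users-table field described above would feed `user_locations`):

```python
# Hypothetical sketch of the "undo wrong-site tasking" idea described
# above. Field names are made up for illustration; FOG's real schema
# differs.

def tasks_to_cancel(tasks, user_locations, host_locations):
    """Return the IDs of tasks whose creator's assigned location does
    not match the target host's location."""
    cancel = []
    for task in tasks:
        user_loc = user_locations.get(task["created_by"])
        host_loc = host_locations.get(task["host"])
        # Users with no assigned location are left alone (the extra
        # field is non-required, as suggested above).
        if user_loc is not None and user_loc != host_loc:
            cancel.append(task["id"])
    return cancel
```

A periodic job could run this against the active-tasks table and cancel the mismatches, which is essentially the "undo any tasking a tech does at the wrong site" behavior described.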


  • Moderator

    @Wayne-Workman I can see certain instances where this multi-master setup would be desirable (speaking from my experiences not the OPs).

    1. If you had 200 devices at the remote site and 200 at the HQ site with an MPLS (1.5Mbps) link between the sites, you wouldn’t want all of those clients from the remote site pinging the master node at HQ across the MPLS link.
    2. If your master node was at HQ you would be reliant on the link between the sites to remain operational to image systems at the remote site.
    3. Right now FOG doesn’t have a real good way to limit what techs can do at each location. For example we would only want IT techs at site A to be able to image machines at site A and not site B. As well as the other way around. If you had FOG setup to automatically PXE boot into FOG, techs at site A could accidentally reimage a machine at site B.

    This is just my opinion though.


  • Moderator

    @daddy_duke12 I think what you should do instead of having two full fog servers - is to set up just a storage node at the remote site. This is how fog is intended to work. What you’re wanting to do - while doable - isn’t standard at all. Anyone that comes in behind you is going to be lost. If you come here for help, we’re going to assume the wrong things and give wrong advice - to you and anyone that comes in behind you.

    There’s a thing called the location plugin that is designed specifically for your problem; many people use it. Here’s an article (including video) on that:
    https://wiki.fogproject.org/wiki/index.php?title=Location_Plugin
    It covers the topic of replication and the location plugin.
    Keep in mind - any registered computers at this other location can be exported, and imported into the main web server at HQ - via the web interface.



  • @george1421 You sir… Answered all my questions

    Thank you so much


  • Moderator

    @george1421 Understand that in this multi-master configuration only the images (and snapins, if you use them) will be copied between the systems. You will have two independent host databases and FOG servers. You won’t need to use the location plugin, since you will register the hosts directly with each master node as you set them up. Understand these are equal peers in this setup.

    You can also use the disable-replication and enable-image checkboxes in the image definition itself to control whether a specific image is replicated between the root node and the remote master nodes.


  • Moderator

    Yes it is. This is what I call a Multi-Master setup. Understand this is NOT a supported configuration, but it will work with a caveat. (FWIW, this is how I have my dev environment connected to my production environment. I build and test the images in dev and then replicate them to the prod servers when ready.)

    But anyway, you need to pick one node to be your root node in the multi-master configuration. This will probably be your HQ node. On your HQ node (in the web gui) create a storage group and add your root node into that storage group. Then create a slave node in that storage group, and give that slave node the configuration for the remote master node. You will need to ensure the fog (linux) user is set up correctly, since the root node will need to connect to the remote master node to copy the images over. Once that is set up you can either wait or just restart the FOG replication service, and you should see the images start to populate in the /images folder on the remote master node.
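Before restarting the replication service, it can be worth sanity-checking that the fog linux user's credentials actually work against the remote master node, since FOG's replicator uses FTP to move images. A quick stand-alone check (not part of FOG; the hostname and password below are placeholders you would substitute):

```python
# Quick sanity check that the fog (linux) user can log in to the
# remote master node over FTP, which is what FOG's image replication
# relies on. This is a sketch, not FOG code.
from ftplib import FTP, error_perm

def fog_ftp_login_ok(host, user, password, timeout=5):
    """Return True if an FTP login with these credentials succeeds."""
    try:
        with FTP(host, timeout=timeout) as ftp:
            ftp.login(user, password)
            return True
    except (OSError, error_perm):
        return False

# Example (placeholder values - substitute your remote node's details):
# fog_ftp_login_ok("remote-master.example.com", "fog", "your-fog-password")
```

If this returns False, fix the fog user's password on the remote node (and the matching storage-node definition in the web gui) before expecting replication to work.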

    Now for the caveat: the root master node doesn’t update the image database on the remote master node, so you will need to export your image configurations from the root node (with the web gui) and then import them into the remote master node via its web gui. If you have the skills you can actually create a cron job to do this, but that is a bit beyond this specific question. If you only update existing images on your root node, then you don’t have to do anything; if you add new images to your root node, you will then need to export and import your configuration again (or just copy the settings by hand between the two master servers).
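The manual export/import step above is really just a set difference between the two image tables: find the definitions the root node has that the remote master doesn't. A minimal sketch of that comparison (the field names are invented for illustration - match them to whatever FOG's web-gui export actually produces):

```python
# Sketch of the export/import bookkeeping described above: which image
# definitions exist on the root node but not on the remote master?
# Field names here are illustrative only.

def missing_on_remote(root_images, remote_images, key="name"):
    """Return image definitions present on the root node whose key
    (e.g. the image name) is absent from the remote node's list."""
    remote_keys = {img[key] for img in remote_images}
    return [img for img in root_images if img[key] not in remote_keys]
```

A cron-driven version would export both sides, run a comparison like this, and import only the missing definitions, which keeps the remote database in step without re-importing everything.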


  • Senior Developer

    I don’t understand the question. From the title it’s asking Master Node to Master Node replication, which leads me to believe you have at least two groups?

    If this is the case, all images and snapins can be associated to multiple storage groups. They also will need to be told which group is the “primary” group. The primary group’s master will forward the file to the other groups’ master nodes. Then the master nodes replicate to the rest of their group’s subordinate nodes.
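The two-hop fan-out described above (primary group's master to the other groups' masters, then each master to its own subordinate nodes) can be modeled as a simple transfer plan. A toy sketch with placeholder group/node names:

```python
# Toy model of the replication fan-out described above. Group and node
# names are placeholders, not real FOG identifiers.

def replication_plan(groups, primary):
    """groups: {group_name: {"master": node, "subordinates": [nodes]}}
    Return (src, dst) transfer pairs in the order they would happen."""
    plan = []
    primary_master = groups[primary]["master"]
    # Hop 1: primary group's master -> every other group's master.
    for name, grp in groups.items():
        if name != primary:
            plan.append((primary_master, grp["master"]))
    # Hop 2: each master -> its own group's subordinate nodes.
    for grp in groups.values():
        for sub in grp["subordinates"]:
            plan.append((grp["master"], sub))
    return plan
```

The point of the topology is that the primary master only ever sends one copy per remote group; the intra-group copies happen locally off each group's own master.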

    I think what you’re asking is already done.

