Scripting keeping multi-site-master-setup images updated.


  • Moderator

    Continuing where George and I left off here: https://forums.fogproject.org/topic/8723/fog-replication-and-best-config-setup-for-multisite-masters/6

    @george1421 Yes, in multi-master setup, all fog servers would have a key-pair. But what I was talking about for this specific thing is only using the “real” master’s keys to get this done. Here’s the flow in my head:

    1. “Installation” of this on non-main masters would simply involve placing a bash script, setting up a cron task, and fetching a copy of the “real” main server’s public key to store in some designated place. This bash script is what runs regularly; it requests a web file from the “real” master.

    2. The real master gets a web request for x.x.x.x/imageinfo.php.

    3. imageinfo.php queries the db for image information, formats this, sticks it into a variable.

    4. The variable is signed with the real master’s private key. The signature is put in another variable.

    5. The info is wrapped in the HTML output with a header and footer, like ###begin### and ###end###. The signature is then appended after it, with its own header and footer.

    6. Page is delivered as requested.

    7. The remote server parses the data into a variable, parses the signature into another variable, and checks the data and signature against the public key it fetched earlier. If the data authenticates, it’s trusted; the script then does further parsing and updates the local DB with the data.

    Side thought - I think the data should be bash-formatted so that, once authenticated, it can be written to file and then sourced, so all the variables just load on include. That way no parsing is needed beyond stripping the data out from between the headers and footers.
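    The sign-and-verify flow in steps 2-7 can be sketched end-to-end with the openssl CLI. Everything here is illustrative: the file names, the ###begin###/###end### markers from the thread, and the fake imageinfo contents; the `###sigbegin###`/`###sigend###` markers are my own addition for the signature's header and footer.

    ```shell
    # --- one-time setup: the "real" master's key pair (illustrative names) ---
    openssl genpkey -algorithm RSA -out master_private.pem 2>/dev/null
    openssl pkey -in master_private.pem -pubout -out master_public.pem

    # --- master side: build bash-formatted image info and sign it ---
    cat > imageinfo.txt <<'EOF'
    IMAGE_ID=7
    IMAGE_NAME="win10-base"
    IMAGE_ENABLED=1
    EOF
    openssl dgst -sha256 -sign master_private.pem -out imageinfo.sig imageinfo.txt

    # assemble the page body: data, then signature, each between header/footer
    {
      echo '###begin###'; cat imageinfo.txt; echo '###end###'
      echo '###sigbegin###'; base64 imageinfo.sig; echo '###sigend###'
    } > page.txt

    # --- remote side: strip the markers back out and verify ---
    sed -n '/^###begin###$/,/^###end###$/p' page.txt | sed '1d;$d' > data.txt
    sed -n '/^###sigbegin###$/,/^###sigend###$/p' page.txt | sed '1d;$d' | base64 -d > data.sig

    if openssl dgst -sha256 -verify master_public.pem -signature data.sig data.txt >/dev/null 2>&1; then
      . ./data.txt            # data is bash-formatted, so just source it
      echo "verified: $IMAGE_NAME"
    else
      echo "signature check FAILED" >&2
    fi
    ```

    Because the data block is bash-sourceable, the remote script gets its variables for free after the signature check, exactly as the side thought suggests.
    
    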


  • Senior Developer

    @Wayne-Workman I’m not saying you all are.

    I’m saying how I might approach the situation from a personal mindset.


  • Moderator

    @george1421 said in Scripting keeping multi-site-master-setup images updated.:

    What I was thinking at the time: the replicator knows what it just replicated and to which storage node, so it would seem trivial to make a URL call against that remote storage node (understand I’m using the term storage node interchangeably with a remote FOG server at this point). If the remote storage node is a full FOG server, that remote storage node would process the URL message and update its local database.

    It must be designed so that it cannot be abused. The call must be authenticated somehow to guarantee it’s from the genuine main server.

    I think I could easily code PHP to send post-data that would contain all the information in JSON format, plus a signature for that data. Then the steps I previously mentioned about authentication can still happen.
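    One way that signed POST might look, sketched with curl and openssl rather than PHP: the endpoint name `imagesync.php`, the field layout, and the payload contents are all made up for illustration, and the key pair is generated inline just so the sketch runs.

    ```shell
    # build the JSON payload the master would send (contents are illustrative)
    json='{"action":"update_image","id":7,"name":"win10-base","enabled":1}'

    # sign the exact JSON bytes with the master's private key, base64 the signature
    openssl genpkey -algorithm RSA -out master_private.pem 2>/dev/null
    sig=$(printf '%s' "$json" | openssl dgst -sha256 -sign master_private.pem | base64 | tr -d '\n')

    # the master would then POST both fields to the remote FOG server, e.g.:
    #   curl -X POST http://remote-fog.example/fog/imagesync.php \
    #        --data-urlencode "data=$json" --data-urlencode "signature=$sig"
    # (hypothetical endpoint; shown but not executed here)
    echo "payload and signature ready"
    ```

    The remote side would verify the signature against its stored copy of the master's public key before acting, so the authentication steps discussed earlier still apply unchanged.
    
    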


  • Moderator

    @Tom-Elliott We aren’t asking for a feature. Just talking it out at this point, and maybe we might work on something on the side.


  • Moderator

    Tom, understand this is just a brainstorming session right now to throw ideas around. At this point we are only talking; we aren’t starting on anything.

    I’m fairly sure that replication does have a hook now. Mind you it’s based on the nodes/groups rather than the replication process itself.

    The hook (or attachment point) I was looking for is when the (object) has been fully transmitted to the remote storage node and before it loops back to the top to start replicating the next (object). What I was thinking at the time: the replicator knows what it just replicated and to which storage node, so it would seem trivial to make a URL call against that remote storage node (understand I’m using the term storage node interchangeably with a remote FOG server at this point). If the remote storage node is a full FOG server, that remote storage node would process the URL message and update its local database.

    That all said, hooks work at the server they’re running from. While it is possible to do a hook to perform this, I think it might make more sense to use the “Full server method” but connect to the “main” server. On the main server, create the groups and nodes you need and make your adjustments.

    Yeah, I agree. I kind of covered that point above.

    What will this do? It will allow any entry on the “Main” server (images, groups, hosts, printers, etc…) to be immediately available to ALL storage servers at ALL sites.

    It will make images and snapins available to all storage nodes in the storage group after replication. Today I have a remote FOG server connected using the existing storage node technology, and everything replicates just fine with the current methodology. The issue is that the remote FOG server technicians can’t see the images sent from HQ until I export the image information from the root FOG server web GUI and then import it into the remote FOG server web GUI. The goal is to eliminate this step.

    The second (different but connected) issue is image replication. Sometimes these remote FOG servers sit beyond a slow WAN link, where it might take several hours for the image to reach the remote storage node (storage node or FOG server). It’s not so much of an issue for a new image, but if we update an image, the way the replicator currently works someone at the remote FOG server could try to deploy that image even though it’s only partially replicated to the remote site. Using the hook points and remote URL calls we could disable the image on the remote FOG server by setting it to disabled in the database, and then re-enable it once the replication is done. (Ideally we would want a replication-in-progress flag, but I’m trying to stay within the framework that has already been set up.)

    Because the other nodes are “full on servers”, the ipxe and default.ipxe will be loaded from the proper node.

    Right. In this multi-master setup the remote FOG servers may or may not be aware there is a superior node in their configuration. The nodes would operate independently of the superior node. All images and snapins can be constructed, tested, and then released from the superior node, letting the replication process send the [object] and its database information to the subordinate server. Expanding this out, you could make a massive FOG server structure with each node operated independently of the others, while still retaining the concept of traditional storage nodes, because they have value for on-site load balancing.

    I hope I haven’t made this too complex, because while I tabled the idea for a while, it has been rolling around in the back of my head. There are others who could benefit from this type of setup, and (in my mind) getting to this place from where we are today is just a small jump without any major code alterations; 90% of what we need is already in the box today.

    Ugh, this editor has mangled my response; I’m trying to get the whole post to show.
    It doesn’t like square brackets around words. It ate half of my post because I used square brackets! Ugh.


  • Senior Developer

    @george1421 I’m fairly sure that replication does have a hook now. Mind you it’s based on the nodes/groups rather than the replication process itself.

    That all said, hooks work at the server they’re running from. While it is possible to do a hook to perform this, I think it might make more sense to use the “Full server method” but connect to the “main” server. On the main server, create the groups and nodes you need and make your adjustments.

    What will this do? It will allow any entry on the “Main” server (images, groups, hosts, printers, etc…) to be immediately available to ALL storage servers at ALL sites.

    Then just associate the images appropriately to each group/server it’s expected to reside on. This will allow all the nodes to receive the images during replication, and when you upload at each site it will upload to its “location”. This is the premise, more or less, behind the location plugin too. The only issue, as far as I can see, would be the PXE item. Meaning, it will pull information from the “main” server (inits, kernels, updates, etc…) rather than from their own location. I could update this to use the location’s information though.

    Because the other nodes are “full on servers”, the ipxe and default.ipxe will be loaded from the proper node.

    Just my own ramblings I suppose.


  • Moderator

    @Tom-Elliott We were talking about how to solve a situation where, for technical or political reasons, a company has 2 or more standalone FOG servers: how could images be developed on a root FOG server and be replicated to all other FOG servers in the storage group? This can happen today with the current replication setup by adding the remote FOG servers as “storage nodes” even though they are full FOG servers. The singular issue is how to update the images and snapins tables on the remote FOG servers. I can do this today by exporting the image settings from the root FOG server and importing that exported file into the remote FOG servers, but that is very manual. So we were discussing how the root FOG server could send messages instructing the remote FOG servers to update their local images and snapins tables.

    When I was actively thinking about this, the idea was to write a hook for the replicators that would get called when a replicator finished moving one object to the remote nodes. That hook could make a remote URL call to that remote FOG server to update its database with the associated image/snapin settings.

    At this point we were talking about the communication format of this remote URL call; the call should send its data in JSON format.

    Understand this is just a general discussion of what could be done. I think the last time I talked with you about this idea you said the replicators didn’t have that hook available so that is where my idea stayed (in dream land).


  • Senior Developer

    @george1421 I don’t know what everyone is discussing, but PHP has native functions for JSON encoding/decoding.

    It’s literally called:
    json_encode() and json_decode()


  • Moderator

    @Wayne-Workman there should be PHP libraries to encode and decode this format. Node.js uses this format quite a bit too.


  • Moderator

    @george1421 said in Scripting keeping multi-site-master-setup images updated.:

    I would assume the communication messages should use JSON for future compatibility reasons

    I just reviewed the JSON format; I can do that, piece of cake.


  • Moderator

    @george1421 said in Scripting keeping multi-site-master-setup images updated.:

    The master node could send a web page call to the remote node to disable the image because of replication and then re-enable it once the replication job has completed.

    That’s a great idea. Again though, the communications must be protected. The message needs to be signed, and the server expected to act on it needs to authenticate the signature with an on-hand copy of the main server’s public certificate. Otherwise anybody with a web browser can call the URLs and do whatever they want.

    I think an automated script should call the main server every minute or so to check whether an image needs to be disabled or enabled, and whether some image definition needs to be created or updated. It could all be done in one call, I would think.
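    A rough skeleton of that minute-ly poll, with the cron entry on top. The paths, the log file, and the `fetch_master` reply are all hypothetical; the real fetch would be a curl to the master (commented below), and the stub stands in for it so the flow can actually run. Signature verification, sketched earlier in the thread, would sit between the fetch and the `eval`.

    ```shell
    # /etc/cron.d/fog-imagesync (hypothetical): poll the real master every minute
    #   * * * * *  root  /opt/fog-sync/poll.sh >> /var/log/fog-imagesync.log 2>&1

    # poll.sh skeleton: one call covers both enable/disable state and definition
    # updates. fetch_master is stubbed with a canned reply for this sketch.
    fetch_master() {
        # real version: curl -s --max-time 10 "http://real-master.example/imageinfo.php"
        printf 'IMAGE_ID=7\nIMAGE_ENABLED=0\n'
    }

    reply=$(fetch_master) || exit 1
    # (verify the master's signature on $reply here before trusting it)
    eval "$reply"                     # bash-formatted reply, so just load it

    if [ "$IMAGE_ENABLED" = "0" ]; then
        echo "image $IMAGE_ID: disabling locally (replication in progress)"
    else
        echo "image $IMAGE_ID: enabling locally"
    fi
    ```

    The `eval` is only safe because the reply would be signature-checked first; an unauthenticated reply must be discarded, never evaluated.
    
    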


  • Moderator

    I can say I’m a bit out of my element when it comes to this RPC-like communication (I would assume the communication messages should use JSON for future compatibility reasons). I understand the intent of what you are saying. In my mind (back when I was thinking about it), I thought that if I could write a hook keyed to when the image replicator finished moving a file to the remote node, it could call a web page on the remote node to insert the DB record for the image it just transferred. A traditional storage node might just return true or ignore the URL call, but a real FOG server would process the call and add the image definition to the remote FOG server.

    Along the same lines, if you are working with a multi-site FOG install and your site-to-site links are slow (i.e. 1.5 Mbps MPLS links), it may take days for an image to replicate from your HQ site to a remote site; somehow that image needs to be blocked from deployment while it is being replicated. The master node could send a web page call to the remote node to disable the image because of replication and then re-enable it once the replication job has completed.
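    Wrapped around one replication cycle, that disable/re-enable call might look like the sketch below. The `imagestate.php` endpoint, its parameters, and the server name are entirely hypothetical (FOG exposes no such URL today), so the call is stubbed with an echo, and rsync stands in for whatever the replicator actually uses.

    ```shell
    # Hypothetical hook around one replication cycle for image ID 7.
    REMOTE="http://remote-fog.example"   # the subordinate full FOG server
    IMAGE_ID=7

    # toggle() would hit a signed URL on the remote server; stubbed with echo here
    toggle() {  # $1 = enable|disable
        echo "would call: $REMOTE/fog/imagestate.php?id=$IMAGE_ID&state=$1"
        # real call (signature parameter omitted for brevity):
        # curl -s "$REMOTE/fog/imagestate.php?id=$IMAGE_ID&state=$1&sig=..."
    }

    toggle disable                                  # block deploys at the remote site
    # rsync -a /images/win10-base/ remote:/images/  # the slow WAN copy happens here
    toggle enable                                   # replication done, safe to deploy
    ```

    Disabling before the transfer and re-enabling after it approximates the replication-in-progress flag without any schema changes, which matches the goal of staying inside the existing framework.
    
    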

