Tom, understand this is just a brainstorming session right now to throw ideas around. At this point nothing is settled; we're just starting to talk about it.
I’m fairly sure that replication does have a hook now. Mind you, it’s based on the nodes/groups rather than the replication process itself.
The hook (or attachment point) I was looking for is the moment when the (object) has been fully transmitted to the remote storage node, before the loop returns to the top to start replicating the next (object). At that point the replicator knows what it just replicated and to which storage node, so it would seem trivial to make a URL call against that remote storage node (understand I’m using the term storage node interchangeably with a remote FOG server at this point). If the remote storage node is a full FOG server, it would process the URL message and update its local database.
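To make that concrete, here is a rough sketch of what such a call could look like. Everything here is hypothetical — the endpoint path, the auth header, and the payload fields are my own invention, not an existing FOG API — it’s only meant to show how little the replicator would need to know:

```python
import json
import urllib.request

def notify_remote_node(node_url, api_token, object_type, object_id, metadata):
    """Tell a remote full-FOG-server node that an object finished replicating.

    The endpoint path, token header, and payload shape are hypothetical;
    a real version would use whatever API the remote server actually exposes.
    """
    payload = json.dumps({
        "type": object_type,   # e.g. "image" or "snapin"
        "id": object_id,
        "metadata": metadata,  # the row data the remote database should store
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{node_url}/fog/replication/complete",   # hypothetical endpoint
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-FOG-Token": api_token,             # hypothetical auth header
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status == 200

# Called by the replicator right after an object finishes transmitting,
# before the loop moves on to the next object, for example:
# notify_remote_node("http://remote-fog.example.com", "secret",
#                    "image", 42, {"name": "Win10-Base", "enabled": 1})
```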
That all said, hooks work on the server they’re running from. While it is possible to use a hook to perform this, I think it might make more sense to use the “Full server method” but connect to the “main” server. On the main server, create the groups and nodes you need and make your adjustments.
Yeah, I agree. I kind of covered that point above.
What will this do? It will allow any entry on the “Main” server (images, groups, hosts, printers, etc…) to be immediately available to ALL storage servers at ALL sites.
It will make images and snapins available to all storage nodes in the storage group after replication. Today I have a remote FOG server connected using the existing storage node technology, and everything replicates just fine using the current methodology. The issue is that the remote FOG server’s technicians can’t see the images sent from HQ until I export the image information from the root FOG server’s web GUI and import it into the remote FOG server’s web GUI. The goal is to eliminate this step.

The second (different but connected) issue is with image replication. Some of these remote FOG servers sit beyond a slow WAN link where it might take several hours for an image to reach the remote storage node (storage node or FOG server). That’s not much of an issue for a new image, but with the way the replicator currently works, if we update an image, someone at the remote FOG server could try to deploy that image even though it’s only partially replicated to the remote site. Using the hook points and remote URL calls, we could disable the image on the remote FOG server by setting it to disabled in its database, then re-enable it once replication is done. (Ideally we would want a replication-in-progress flag, but I’m trying to stay within the framework that has already been set up.)
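Here’s a sketch of that disable/re-enable sequence, under the same caveat that the URL and field names are assumptions rather than anything FOG ships today:

```python
import json
import urllib.request

def set_remote_image_enabled(node_url, api_token, image_id, enabled):
    """Flip the enabled flag for an image in a remote node's database.

    Endpoint path, auth header, and field names are hypothetical."""
    payload = json.dumps({"id": image_id, "enabled": 1 if enabled else 0})
    req = urllib.request.Request(
        f"{node_url}/fog/image/enabled",           # hypothetical endpoint
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "X-FOG-Token": api_token},        # hypothetical auth header
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status == 200

def replicate_image(node_url, api_token, image_id, transfer):
    """Disable the image on the remote node, copy it, then re-enable it."""
    set_remote_image_enabled(node_url, api_token, image_id, False)
    transfer()  # the existing replication step (the file copy FOG does today)
    # Only re-enable on success; if the transfer throws, the image stays
    # disabled on the remote node, which is the safe default.
    set_remote_image_enabled(node_url, api_token, image_id, True)
```

The nice part is that reusing the existing enabled flag this way needs no schema changes, which is exactly why I’d rather stay inside the framework that’s already set up instead of adding a new replication-in-progress column.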
Because the other nodes are “full-on servers”, the ipxe and default.ipxe will be loaded from the proper node.
Right. In this multi-master setup the remote FOG servers may or may not be aware there is a superior node in their configuration; the nodes would operate independently of the superior node. All images and snapins can be constructed, tested, and then released from the superior node, letting the replication process send the [object] and its database information to the subordinate servers. Expanding this out, you could build a massive FOG server structure with each node operated independently of the others, while still retaining the concept of traditional storage nodes, because they have value for on-site load balancing.
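Putting the pieces together, the superior node’s replication pass might conceptually look like the loop below, reusing the two helpers sketched above. The node list, helper names, and the copy_object_data stub are all mine, just to illustrate the shape of it:

```python
# Hypothetical list of subordinate full servers the superior node pushes to.
SUBORDINATE_NODES = [
    {"url": "http://site-a-fog.example.com", "token": "tokenA"},
    {"url": "http://site-b-fog.example.com", "token": "tokenB"},
]

def copy_object_data(node, obj):
    """Placeholder for the existing transfer step the replicator already does."""
    raise NotImplementedError

def replication_pass(released_objects):
    """Push every released object, plus its database info, to each site."""
    for node in SUBORDINATE_NODES:
        for obj in released_objects:
            # Disable first so a half-copied image can't be deployed,
            # copy the data, publish the metadata, then re-enable.
            set_remote_image_enabled(node["url"], node["token"], obj["id"], False)
            copy_object_data(node, obj)
            notify_remote_node(node["url"], node["token"],
                               obj["type"], obj["id"], obj["metadata"])
            set_remote_image_enabled(node["url"], node["token"], obj["id"], True)
```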
I hope I haven’t made this too complex, because while I tabled the idea for a while, it has been rolling around in the back of my head. There are others who could benefit from this type of setup, and (in my mind) getting to this place from where we are today is just a small jump without any major code alterations; 90% of what we need is already in the box today.