Create the concept of a ForeignMasterStorage (deployment) node
-
@Wayne-Workman point well taken.
I’m not really interested in creating a mishmash of scripts to do crazy things. I can see what needs to be done to make this work as FOG is currently designed.
I’ve spent some time recreating my POC environment and have a mostly workable system using the current SVN. Based on the results of my testing I changed a word in the title of this feature request from slave to foreign master storage node, because it sounds much cooler and is a bit more accurate.
All joking aside: I found that if I create 3 storage groups representing 3 different sites, each with its own master storage node, and then in the center storage group add the master storage nodes from the left and right storage groups as “storage nodes” (or, to use my made-up name, “Foreign Master Storage nodes”), I can send the images from a central master storage node to all other storage nodes in the other storage groups. (It’s a bit hard to explain with just words, but it does work.) Eventually each storage group will be located at a different site, so I need a fully functional master node in each storage group.
I did find an interesting fact: I seeded the center master storage node with images from my production server, but the replication did not start until I created the first image entry in the database. Then the files were replicated from the center Master Storage node to the other Foreign Master Storage nodes. The issue I’m at right now is that I need to get the content of the images and snapins tables to both the left and right Foreign Master Storage nodes, or they won’t start replicating to their own storage nodes.
-
@george1421 A scripting solution could keep just these two tables updated. I suppose you could create a plugin that does it?
-
@george1421 I’m still confused.
In trunk you can set up multiple storage groups for both snapins and images. You can also now specify which storage group is the primary/master group for a snapin or image.
This will do the same thing you’re requiring. It will replicate to other storage groups from the primary storage group as assigned by the master.
Doing this, you would not need to create the storage nodes under the primary group as you’ve described.
Tie this with the location plugin and I believe you would have everything you’ve described.
-
@Tom-Elliott said:
@george1421 I’m still confused.
It’s highly possible that I’m ignorant of the features you have added to the trunk builds, and I’m not doing a good job of explaining the current situation, where I think FOG is highly capable of accomplishing this with a few adjustments. I’ve looked through the wiki to see if there was something similar to what I need to do. The only thing that came close was https://wiki.fogproject.org/wiki/index.php/Managing_FOG#Storage_Management (the second graphic that shows the multiple storage groups). This is the POC concept used to set up my test environment.
I took that previous drawing and built this sample layout.
In this scenario I have these requirements (almost sounds like a school project):
- Will be constructed with 3 or more sites
- Connection to each site will be via a 1.5Mb/s MPLS link
- Because of the slow link each site must have its own FOG Deployment server to provide PXE booting
- Each of the sites could have one or more VLANs, each with its own subnet, isolated by a router.
- Corporate images will be created at the HQ site and distributed to all sites. There is a potential that each site could have its own images for specific purposes, so each site must be able to capture images to its local deployment server.
- On a corporate deployed image there may be a reason to recall or block deployment of a specific image across the organization (such as a detected flaw in the image).
- The location plugin is installed on all FOG servers. The only site that will have more than one locally defined location is LA
To clarify the above picture:
In the HQ location there is only one deployment server HQMasterNode
The LAMasterNode and ATLMasterNode are connected back to HQ via an MPLS link (right now this is all done in a single virtual environment)
In the LA site there are 3 FOG servers: one FOG deployment server, one FOG storage server, and one FOG storage server with PXE booting enabled (I think that is an option). The LA site also has two VLANs with about 700 nodes distributed across them. There are two defined locations for the LA site (LA_BLD01 and LA_BLD02)
The ATL site only has one FOG deployment server and one storage node on a single subnet. This is how I have the test environment built in my test lab.
As I posted before, I seeded the HQMasterNode with images from my production FOG server. No replication happened between the HQMasterNode, LAMasterNode or ATLMasterNode until I created the first image definition on the HQMasterNode. Once that first image definition was created, all images that were seeded on the HQMasterNode were replicated to the other two nodes in the HQ Storage Group. This worked great; now all images created on the HQMasterNode were located at each site’s FOG deployment server. The images did not get distributed beyond each site’s MasterNode though. On the ATLMasterNode I created a single image definition, and then the images were replicated to the ATLSlaveNode01.
The first issue I ran into was that even though I created all of the image definitions on the HQMasterNode, those definitions were not copied to the LAMasterNode or the ATLMasterNode. Somehow I need to get those definitions (I’ll assume the same for the snapins) from the HQ deployment server to each site’s deployment server. This could be accomplished with a mysqldump of the tables before the replication starts, picked up at the remote end and a mysqlimport run. Or by making URL calls to each site’s deployment server to update its database with the image information.
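To make that concrete, here is the sort of thing I have in mind for the dump/import route. This is only a rough sketch: the `fog` database name and the `images`/`snapins` table names are taken from my install, credentials and paths are placeholders, and I’m loading the dump with the plain mysql client rather than mysqlimport since the dump is ordinary SQL.

```bash
#!/bin/bash
# Rough sketch only - not production ready.
# Assumes the FOG database is named "fog" and the definition tables are
# "images" and "snapins"; adjust names, credentials and paths to suit.

# On the HQ deployment server, dump just the two definition tables.
# --no-create-info leaves the remote table structure alone,
# --replace makes the import repeatable (existing rows are overwritten).
mysqldump -u root -p --no-create-info --replace fog images snapins > /images/fog_defs.sql

# ...normal replication (or scp) carries /images/fog_defs.sql to each site...

# On each site's deployment server, load the definitions.
mysql -u root -p fog < /images/fog_defs.sql
```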
-
Knowing what you know about the new features built into the SVN trunk, can I do this without any new “stuff” being added to FOG?
-
In as simple terms as I can muster, YES!
-
Excellent…
-
@george1421 said:
1.5Mb/s
Tom is right, it will work.
But I wanted to point out that a typical 16GB (compressed size) image, pushing one copy of the image to one other node across a 1.5Mb/s link will take roughly 24 hours, and that’s if you have 100% of the 1.5Mb/s dedicated to the transfer.
Have you thought about this? How big are your images?
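For anyone wanting to check that estimate, the back-of-envelope math behind it (assuming 100% link utilisation and no protocol overhead) looks like this:

```bash
# 16 GB compressed image over a 1.5 Mb/s link, best case:
echo "scale=1; (16 * 8 * 1024) / 1.5 / 3600" | bc
# GB -> megabits, divided by link speed (seconds), divided by 3600 (hours)
# => 24.2, i.e. roughly a full day per image per node
```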
-
@Wayne-Workman said:
But I wanted to point out that a typical 16GB (compressed size) image, pushing one copy of the image to one other node across a 1.5Mb/s link will take roughly 24 hours, and that’s if you have 100% of the 1.5Mb/s dedicated to the transfer.
Have you thought about this? How big are your images?
I selected a network connection that was artificially low specifically for the POC. I see network latency being a real issue with a distributed design.
Our thin image (Win7 only + updates) is about 5GB in size and our fat image is over 15GB. At 1.5Mb/s I would suspect that we would have FTP transfer issues with file moves that take longer than 24hrs to complete. But that is only speculation.
It’s good to hear that FOG could do this without any changes.
-
If you are not updating images that often it might be more logical to sneaker-net images to the other sites when you make changes.
-
@Joseph-Hales said:
If you are not updating images that often it might be more logical to sneaker-net images to the other sites when you make changes.
Good point, it just may be easier and quicker to throw the image on a flash drive and overnight it to the other sites if transfer speed is required. But then there are more hands-on steps at each site to import the image and create the DB entries.
While it’s clear that the current FOG trunk can do this, right now the how is missing from this discussion.
-
@george1421 said:
While it’s clear that the current FOG trunk can do this, right now the how is missing from this discussion.
For the sneaker net or for the setup you illustrated below?
-
The how is to enable the Location Plugin (in the case of having FOG automate the stuff for you).
-
At the risk of extending this feature request even more…
Please understand I’m not trying to be difficult, I truly want to understand if what I want to do is possible. I think we have a communication misalignment. I’m not doing a very good job explaining the situation because I keep seeing the same results (maybe that is the only answer, I don’t know).
But I’m assuming from your context that in my drawing below there is one full deployment server in that network, with the rest being storage nodes. Is that a correct assumption?
I understand the function of the location plugin: it allows you to assign storage groups and storage devices to a location, and then you link a host to a location so it knows where to get (and, if necessary, put) an image. I get that; I’ve been using FOG for quite a while.
The issue(s) I’m seeing here are these:
- The storage nodes are not fully functional deployment servers. They are missing the tftpboot directory; while they do have the PXE boot kernel and file system, they alone cannot provide PXE booting services for a remote site.
- The storage nodes do not appear to have a SQL server instance running, so I assume they are reaching out to the master node’s database for each transaction (see the quick check after this list). Historically I’ve seen this being an issue with other products as they try to reach across WAN links for transactional data.
- There is no local web interface on the storage nodes, so all deployment techs from every site must interface with the HQ master node. This shouldn’t be an issue since the web interface is very light as opposed to some other Flash or Silverlight based management consoles.
- While this is not a technical issue, it’s more of a people issue: since you will have techs from every site interfacing with a single management node, it’s possible for one tech to mistakenly deploy to (i.e. mess up) hosts at another site, because there is no built-in location awareness in regards to their user accounts.
- On the deployed hosts, where does the FOG service connect to? Is it the local storage node or the master node?
- Storage nodes can only replicate with the master node, i.e. if there are two storage nodes at a remote site, one storage node cannot get its image files from the other storage node at that site. All images must be pulled across the WAN for each storage node.
- Multicasting is only functional from the master node. So in the diagram below only the HQ could use multicasting to build its clients. (edit: added based on a current unrelated thread)
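For the database point above, these are the quick checks I used on a storage node; the service names and the FOG web path may differ by distro and FOG version, so treat them as a starting point rather than gospel:

```bash
# Is there a local MySQL/MariaDB instance running on the storage node at all?
systemctl status mysqld mariadb 2>/dev/null | grep -i "active:"

# Where does the node's FOG code point for its database?
# (path and constant name are from my trunk install - adjust if yours differs)
grep -ri "DATABASE_HOST" /var/www/fog/lib/fog/ 2>/dev/null
```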
The fog system is very versatile and you guys have put a LOT of effort into it since the 0.3x days. And you should be acknowledged for your efforts. Understand I’m not knocking the system that has been created or your time spent on the project.
Working through this post, I can see that having a single master node with the rest as storage nodes would work if:
- The /tftpboot directory was included in the files replicated from the master node and the tftp service was set up in xinetd (see the sketch after this list). Actually this could be built in as part of a storage node deployment by default, by having the service and the tftpboot folder set up even if it isn’t used in every deployment; there is no downside IMO.
- The user profile was location-aware, to keep techs from making changes to hosts in other locations. The location awareness must have the ability to assign users who have global access for administration purposes.
- The storage nodes would have to be aware of latency issues with slow WAN links, and/or not break completely with momentary WAN outages.
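On that first point, this is roughly what I mean by setting up the tftp service in xinetd on a storage node. It’s only a sketch assuming a RHEL/CentOS style box; package names and paths will differ on other distros, and /tftpboot itself would still need to be populated from the master node:

```bash
#!/bin/bash
# Sketch: give a storage node the pieces needed to answer PXE/tftp requests.
yum -y install tftp-server xinetd

# Enable the tftp service in xinetd and point it at /tftpboot
cat > /etc/xinetd.d/tftp <<'EOF'
service tftp
{
        socket_type     = dgram
        protocol        = udp
        wait            = yes
        user            = root
        server          = /usr/sbin/in.tftpd
        server_args     = -s /tftpboot
        disable         = no
}
EOF

# /tftpboot content (undionly.kpxe, default.ipxe, etc.) would have to come
# over from the master node, e.g. as part of the replication run.
mkdir -p /tftpboot
systemctl restart xinetd
```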
-
Tom, I was linked to this thread by the OP and I am in the exact same position. Wayne is 100% correct in what I need and I believe that to be what George1421 needs too.
Let’s break it down simply.
- Create image on “Master” server.
- Replicate image to all other storage nodes in the same group.
- Update the remote servers’ DBs to reflect what the “Master” server just copied.
Steps 1 and 2 work fine, but there doesn’t appear to be a way to do step 3 automatically. This would not be such a major issue if I were able to manually create the image definition at each site, but when I try I am presented with nothing but a white screen saying “add image definition” on the top left and absolutely nothing more on the screen.
I don’t want to export/import MySQL DB files from the “Master” to the remote sites; I have been doing that for years with .32 and it’s not a very good practice. Simply updating the remote MySQL tables to reflect the images that were just copied should not be a huge task for your software to perform.
Does that explain what I am, and what I believe George1421 to be, looking for?
-
I can tell you, through testing, this is what I know so far.
- You can do this mostly with a storage node and the location plugin.
- The storage nodes don’t have the bits required for tftp to work
Some caveats to what I just said.
- The storage nodes are storage nodes only. You can add the tftp service description for xinetd and the tftp files, but a storage node is not a full deployment node. There is no user interface; all of the techs must access the master node to deploy images to the remote locations.
- The storage nodes do not have a local mysql database (as far as I can see). They connect back to the master deployment node to access its database. I see this as being an issue with latency when crossing a WAN link.
I have been thinking of ways to map this out to do what ( I ) need it to do, but it would be one-off and fragile at best. The best solution is to have this done natively within the program and not use any external hacks.
-
@george1421 said:
The best solution is to have this done natively within the program and not use any external hacks.
Linux is the largest collection of hacks in any one spot ever lol.
So,
I’m strongly against the @Developers making such drastic code-base changes. I want to see a 1.3.0 release soon, and this will not only delay the release but most likely create a slew of bugs that need to be worked out… again… But they can do as they please.
The new FOG Client that is being developed by @Jbob runs on GUI or CLI-only Linux, on just about every single distribution you can think of. It is able to deploy snapins to Linux without issue; I’ve witnessed it (his nightly builds, which are not stable or in FOG Trunk currently).
I would suggest we pool our knowledge to just create some base-level scripts that will sync two DBs based on the exact same rules that the FOGImageReplicator follows. I’ve outlined these rules before in other threads.
Once the script is developed, we can make it into a sourceforge project. People can deploy the script via Snapins to the remote storage nodes themselves to update the DBs using the web interface.
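As a first cut, the script could be as small as the sketch below: run it on the master (from cron, or wrapped in a snapin) and point it at each remote node in turn. Same assumptions as earlier in the thread: database `fog`, tables `images` and `snapins`, and placeholder host/credentials.

```bash
#!/bin/bash
# Minimal one-way definition sync sketch: master DB -> one remote node's DB.
# Mirrors only the image/snapin definition tables, not the image files themselves.
REMOTE_DB_HOST="lamasternode.example.com"   # hypothetical remote node
DB_USER="fogsync"                           # placeholder credentials
DB_PASS="changeme"

mysqldump -u "$DB_USER" -p"$DB_PASS" \
    --no-create-info --replace fog images snapins \
  | mysql -u "$DB_USER" -p"$DB_PASS" -h "$REMOTE_DB_HOST" fog
```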
-
@Wayne-Workman said:
I would suggest we pool our knowledge to just create some base-level scripts that will sync two DBs based on the exact same rules that the FOGImageReplicator follows. I’ve outlined these rules before in other threads.
I’m not thinking anything drastic. It’s more like how pfSense sends HTTP calls to a remote node to sync its configuration data. While it’s a deeper discussion than we should have here, the idea would be for the FOGReplicator to move the files as it does today. When all of the files in the current image directory have been moved, it would then make an HTTP call to a PHP page on the remote node (it should already know everything it needs to know to do this [i.e. no new database fields]) which adds the image information to the remote database.
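So once the replicator finishes moving the files for an image, it (or a wrapper around it) would fire one HTTP call per remote node, something like the sketch below. To be clear, the endpoint and parameters here are entirely made up to illustrate the shape of the idea; no such page exists in FOG today.

```bash
# Hypothetical only - /fog/service/imagesync.php does not exist in FOG.
# It stands in for a page on the remote node that would take an image
# definition and insert it into that node's database.
curl --silent --data "name=Win7Thin&path=/images/win7thin" \
  "http://lamasternode.example.com/fog/service/imagesync.php"
```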