Create the concept of a ForeignMasterStorage (deployment) node
-
@Tom-Elliott I think he’s referring to keeping full FOG DBs on each server, and keeping them synced.
-
@Wayne-Workman Probably, but he kind of covers both sides. What he first describes appears to be the Location plugin in a nutshell.
-
Unless I’m missing something, I do think FOG is pretty close to what I’m looking to implement.
My perspective is looking at the master/slave setup as two FOG servers at different sites, isolated by a VPN connection. We would want each site’s clients to contact their own local FOG server. All images and snapins would be created and managed from the master node and then replicated via the storage node transfer that is already built in. The bits that are missing are getting global information/reports about all defined hosts from a single console, and scheduling deployments from the master or a site-specific slave node to any client computer. This is a bit more than the storage node is capable of doing right now.
-
@george1421 What do you mean by global information? Or reports?
-
@george1421 I don’t understand. Storage nodes can be set up however you want.
What the Location plugin does is create a way to tell hosts how and what to call when imaging. For the client side, just set the client to point at that local node to get the relevant stuff, all while looking at that same node. This how/what more or less implicitly tells the host where to download its init and kernel if the TFTP option is selected.
Locations do not need to be node-specific either. They can be assigned to a storage group, so you have a kind of load balancing as well.
I even believe it’s smart enough to tell the host where to upload the image as well.
-
@Wayne-Workman said:
@george1421 What do you mean by global information? Or reports?
I’m trying to think big picture here, but let’s say I want to see all deployments on both the master and slave servers across the company. If the FOG servers are not linked in some manner, I would have to log into each FOG server and run the built-in report to get the deployments. Or suppose I wanted an inventory list of systems vs. deployed images for every computer on every FOG server; how could I go about that with the current capabilities?
-
There are a lot of complications to what you’re wanting to do… having DB independence means having a full FOG server at each site. But then you run into the issue of syncing the DB.
And syncing the DB is only required as far as creating/updating image definitions. This could probably be scripted with cron, and would require remote access to all the MySQL instances on all the servers…
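Very roughly, something like this could run nightly from cron on the main server (just a sketch; the `images` table name matches FOG’s schema as I recall it, but the host names and credentials here are made up, and you’d want to verify the exact table list against your own database first):
```bash
#!/bin/bash
# Hypothetical nightly sync of image definitions, run from cron on the
# main FOG server. Server names and credentials below are placeholders.
REMOTE_SERVERS="fog-site2.example.com fog-site3.example.com"

# Dump only the image definitions from the local 'fog' database (the
# dump includes DROP/CREATE statements, so each import replaces the
# remote table wholesale).
mysqldump -u fogsync -p'ChangeMe' fog images > /tmp/fog_images.sql

for server in $REMOTE_SERVERS; do
    # Requires remote MySQL access to be enabled on each server.
    mysql -h "$server" -u fogsync -p'ChangeMe' fog < /tmp/fog_images.sql
done
```
Drop that in a script, add a crontab entry like `0 2 * * * /opt/scripts/fog-db-sync.sh`, and each site would pick up the previous day’s definitions overnight.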
Each FOG server will be trying to perform replication among the masters/slaves… so you’d have to totally disable that service on all servers except for one.
You’d still need the Location plugin in order to tell clients where to pull images from, where to upload to, and so on. You’d need to define your storage nodes, groups, and masters/slaves identically on all servers…
As far as running reports, you can look at the SQL underneath the various FOG buttons (it is open source after all). You can enable remote MySQL access from a list of specified IP addresses (for security) and create a script that will pull the reports you want. You could even have a little virtual machine running FOG, and just change the settings in /opt/fog/.fogsettings for each site you want to work with, for each site report you want to run.
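As a sketch of the report idea (the `fog` database name is the default, but I’m going from memory on the `hosts` table columns, so treat the query itself as a placeholder and check it against your own schema):
```bash
#!/bin/bash
# Hypothetical multi-site report: query each site's FOG database for
# its host inventory. Server names and credentials are placeholders.
for server in fog-hq.example.com fog-la.example.com fog-atl.example.com; do
    echo "=== $server ==="
    # A read-only MySQL account restricted to specific source IPs
    # would be the safer way to expose this.
    mysql -h "$server" -u fogreport -p'ReadOnly' fog \
        -e "SELECT hostName, hostImage FROM hosts"
done
```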
But,
To be totally honest, Tom has a really strong point here… All of this craziness is not necessary. There are several multi-site organizations that use the standard setup with the Location plugin just fine. They have WAN limitations too. Some go as far as a full server at each location, but with the DB settings pointed at the main server. The provided setup does work, and what you’re wanting to do would create a massive amount of oversight and work that probably very few could follow in your footsteps and do confidently.
I mean, Linux and FOG are pretty foreign to most I.T. people already… Imagine the guy (or gal) who comes in behind you. They would absolutely hate FOG because of how complex they perceive it to be… how fast it would break due to their inaction, or from simply following advice they see here on the forums or in the wiki… advice that won’t work because this setup is so dramatically customized.
I mean… if the WAN goes down… are you going to be worried about imaging computers? Nope… And do you actually know the bandwidth load that MySQL would create for 100 or 1,000 or 5,000 computers? It’s probably pretty low… after all, it’s just text.
My vote is… don’t create a massive monster that nobody but you can tame.
-
@Wayne-Workman point well taken.
I’m not really interested in creating a mishmash of scripts to do crazy things. I can see what needs to be done to make this work as FOG is currently designed.
I’ve spent some time recreating my POC environment and have a mostly workable system using the current SVN. Based on the results of my testing, I changed a word in the title of this feature request from slave to foreign master storage node, because it sounds much cooler and is a bit more accurate.
All joking aside, I found that if I create 3 storage groups representing 3 different sites, each with its own master storage node, and then in the center storage group add the master storage nodes from the left and right storage groups as plain “storage nodes” (or, to use my made-up name, “Foreign Master Storage nodes”), I can send the images from a central master storage node to all of the other storage nodes in the other storage groups. (It’s a bit hard to explain with just words, but it does work.) Eventually each storage group will be located at a different site, so I need a fully functional master node in each storage group.
I did find an interesting fact: I seeded the center master storage node with images from my production server, but the replication did not start until I created the first image entry in the database. Then the files were replicated from the center Master Storage node to the other Foreign Master Storage nodes. The issue I’m at right now is that I need to get the content of the images and snapins tables to both the left and right Foreign Master Storage nodes, or they won’t start replicating to their own storage nodes.
-
@george1421 A scripting solution could keep just these two tables updated. I suppose you could create a plugin that does it?
-
@george1421 I’m still confused.
In trunk you can set up multiple storage groups for both snapins and images. You also specify, now, which storage group is the primary/master group for the snapin or image.
This will do the same thing you’re requiring. It will replicate from the assigned primary storage group out to the other storage groups.
Doing this, you would not need to create the storage nodes under the primary group as you’ve described.
Tie this in with the Location plugin and I believe you would have everything you’ve described.
-
@Tom-Elliott said:
@george1421 I’m still confused.
It’s highly possible that I’m ignorant of the features you have added to the trunk builds, plus I’m not doing a good job of explaining the current situation where I think FOG is highly capable of accomplishing this with a few adjustments. I’ve looked through the wiki to see if there was something similar to what I need to do. The only thing that came close was https://wiki.fogproject.org/wiki/index.php/Managing_FOG#Storage_Management (the second graphic, which shows the multiple storage groups). This is the POC concept used to set up my test environment.
I took that previous drawing and built this sample layout.
In this scenario I have these requirements (almost sounds like a school project):
- Will be constructed with 3 or more sites
- Connection to each site will be via a 1.5Mb/s MPLS link
- Because of the slow link, each site must have its own FOG deployment server to provide PXE booting
- Each of the sites could have one or more VLANs, each with its own subnet, isolated by a router
- Corporate images will be created at the HQ site and distributed to all sites. There is a potential that each site could have its own images for specific purposes, so each site must be able to capture images to its local deployment server.
- On a corporate deployed image there may be a reason to recall or block deployment of a specific image across the organization (such as a detected flaw in the image).
- The Location plugin is installed on all FOG servers. The only site that will have more than one locally defined location is LA
To clarify the above picture:
In the HQ location there is only one deployment server, HQMasterNode.
The LAMasterNode and ATLMasterNode are connected back to HQ via an MPLS link (right now this is all done in a single virtual environment).
In the LA site there are 3 FOG servers: one FOG deployment server, one FOG storage server, and one FOG storage server with PXE booting enabled (I think that is an option). The LA site also has two VLANs with about 700 nodes distributed across them. There are two defined locations for the LA site (LA_BLD01 and LA_BLD02).
The ATL site only has one FOG deployment server and one storage node on a single subnet. This is how I have the test environment built in my test lab.
As I posted before, I seeded the HQMasterNode with images from my production FOG server. No replication happened between the HQMasterNode, LAMasterNode, or ATLMasterNode until I created the first image definition on the HQMasterNode. Once that first image definition was created, all images that were seeded on the HQMasterNode were replicated to the other two nodes in the HQ Storage Group. This worked great; now all images created on the HQMasterNode were located at the site FOG deployment servers. The images did not get distributed beyond each site’s MasterNode though. On the ATLMasterNode I created a single image definition, and then the images were replicated to the ATLSlaveNode01.
The first issue I ran into was that even though I created all of the image definitions on the HQMasterNode, those definitions were not copied to the LAMasterNode or the ATLMasterNode. Somehow I need to get those definitions (I’ll assume the same for the snapins) from the HQ deployment server to each site’s deployment server. This could be accomplished with a mysqldump of the tables before the replication starts, picked up at the remote end and imported with mysql, or by making URL calls to each site’s deployment server to update its database with the image information.
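To make the mysqldump idea concrete, here is roughly what I’m picturing (a sketch only; the node names match my lab, the credentials are placeholders, and the `images`/`snapins` table names should be double-checked against the schema):
```bash
#!/bin/bash
# Rough sketch: push the image and snapin definitions from the HQ
# deployment server to each site's deployment server, piping the dump
# over ssh so remote MySQL access doesn't have to be opened up.
# Node names and credentials are placeholders for my test lab.
SITE_MASTERS="LAMasterNode ATLMasterNode"

for node in $SITE_MASTERS; do
    # Dump just the two definition tables and load them remotely.
    mysqldump -u root -p'fogpassword' fog images snapins | \
        ssh root@"$node" "mysql -u root -p'fogpassword' fog"
done
```
Run before the file replication kicks off, each site’s master would then have the definitions it needs to start replicating down to its own storage nodes.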
-
Knowing what you know about the new features built into the SVN trunk, can I do this without any new “stuff” being added to FOG?
-
In as simple terms as I can muster, YES!
-
Excellent…
-
@george1421 said:
1.5Mb/s
Tom is right, it will work.
But I wanted to point out that pushing one copy of a typical 16GB (compressed size) image to one other node across a 1.5Mb/s link will take roughly 24 hours, and that’s if you have 100% of the 1.5Mb/s dedicated to the transfer.
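The back-of-the-napkin math on that, assuming ideal conditions with zero protocol overhead:
```
16 GB x 8 bits/byte = 128 Gb = 131,072 Mb
131,072 Mb / 1.5 Mb/s ≈ 87,381 s ≈ 24.3 hours
```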
Have you thought about this? How big are your images?
-
@Wayne-Workman said:
But I wanted to point out that pushing one copy of a typical 16GB (compressed size) image to one other node across a 1.5Mb/s link will take roughly 24 hours, and that’s if you have 100% of the 1.5Mb/s dedicated to the transfer.
Have you thought about this? How big are your images?
I selected a network connection for the POC that was artificially low on purpose. I see network latency being a real issue with a distributed design.
Our thin image (Win7 only + updates) is about 5GB in size and our fat image is over 15GB. At 1.5Mb/s I suspect we would have FTP transfer issues with file moves taking longer than 24 hours to complete. But that is only speculation.
It’s good to hear that FOG could do this without any changes.
-
If you are not updating images that often, it might be more logical to sneaker-net images to the other sites when you make changes.
-
@Joseph-Hales said:
If you are not updating images that often, it might be more logical to sneaker-net images to the other sites when you make changes.
Good point, it just may be easier and quicker to throw the image on a flash drive and overnight it to the other sites if transfer speed is required. But then there are more hands-on steps at each site to import the image and create the DB entries.
While it’s clear that the current FOG trunk can do this, right now the how is missing from this discussion.
-
@george1421 said:
While it’s clear that the current FOG trunk can do this, right now the how is missing from this discussion.
For the sneaker net, or for the setup you illustrated earlier?
-
The how is to enable the Location plugin (in the case of having FOG automate the stuff for you).