FOG DEPLOYMENT - STORAGE NODE PREP
I wanted to post and get some assistance/feedback from the FOG community regarding deploying FOG to a larger environment.
400+ Remote Sites (each with a storage node)
I know there are some large sites by endpoint count out there, however not with 406 remote sites that each have a storage node.
https://wiki.fogproject.org/wiki/index.php?title=Testimonials (See: Madison Metropolitan School District)
What is the best way to configure 400+ storage nodes? I was thinking of using FOG to create an image of a storage node, image the nodes, and then reconfigure each with the correct site IP. That unfortunately seems time consuming and inefficient. Any advice or tricks are welcome.
@steveo (“I’m only considering creating FOG storage nodes using FOG”) Well, since you have so many and they are all physical, there should be a way to use FOG to clone the storage nodes. I need to think about the best approach here, but you should be able to install the OS and at least download the FOG repo to the system.
Here is where it gets into some speculation.
Now, when you install FOG, it creates a .fogsettings file. The .fogsettings file lists all of the answers to the questions you supplied when you first installed FOG. So when you reinstall FOG, such as during an upgrade, the installer will reference that file instead of asking the user the questions over again. I wonder if we can leverage that by dropping that file in the proper location during storage node image deployment, then running the installer with the -y flag post deployment? The only caveat here is that you should only install FOG once the storage node has its permanent IP address. There is also a script to reset the IP address after the fact; it just depends on whatever is easiest. You should also consider how you will seed the storage node with images before you send it out to the remote location.
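A rough sketch of that idea. The key names match what a FOG 1.5.x install writes to .fogsettings, but verify against a copy from your own master node; all IP addresses, the interface name, and the password below are placeholders, not real values:

```shell
# Sketch: pre-seed the installer's answer file on a freshly imaged storage
# node, then re-run the installer non-interactively. Placeholder values only.

SITE_IP="10.45.3.10"           # this node's permanent site address (placeholder)
FOGSETTINGS="./fogsettings"    # real path is /opt/fog/.fogsettings

cat > "$FOGSETTINGS" <<EOF
## Answers the FOG installer normally asks for interactively
ipaddress='$SITE_IP'
interface='eth0'
installtype='S'                # S = storage node
snmysqluser='fogstorage'
snmysqlpass='CHANGE_ME'
snmysqlhost='10.0.0.5'         # master (root) FOG server's database
EOF

# Post-deployment, the installer would then be re-run with -y so it reads
# the file above instead of prompting:
#   cd /root/fogproject/bin && ./installfog.sh -y
grep "installtype" "$FOGSETTINGS"
```

A small first-boot script could substitute the correct site IP into the file before calling the installer, which would make the whole step unattended.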
Also during image deployment, you can have the fog master node give the fog storage node its name. Once your storage nodes have been created you won’t need them in the fog database unless you want to reimage them again.
With that many storage nodes, you might consider setting up a dedicated MySQL server. Remember that storage nodes don’t have a local database; they use the database on the master (root) FOG server. That will be 400+ open connections.
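For context, stock MySQL defaults `max_connections` to 151, which 400 storage nodes would exhaust on their own. A sketch of an override fragment; the 650 figure is an illustrative headroom estimate, not a tuned value:

```shell
# Sketch only: write a MySQL override for the master node's database.
# 650 is a rough guess (400 nodes + web UI + headroom), not a tuned number.
CNF="./fog-connections.cnf"    # real path e.g. /etc/mysql/conf.d/fog.cnf
cat > "$CNF" <<EOF
[mysqld]
# ~400 storage node connections plus web UI and headroom
max_connections = 650
# keep idle node connections from piling up indefinitely
wait_timeout    = 600
EOF
cat "$CNF"
```

Whatever values you land on, watch `SHOW STATUS LIKE 'Max_used_connections';` on the master after the nodes come online to see the real peak.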
Hi @george1421 , they will all be physical HP6000 SFF units running as storage nodes. (I imagine managing and replicating all settings across 400 full fog servers will be an administrative nightmare with overhead that I simply cannot afford, the requirements are such that the limitations of a storage node are not a factor in the use case for this implementation)
@steveo So what have you decided to do about the storage nodes at each location? Are they going to be physical or virtual? Will they be full fog servers or fog storage nodes?
If they are going to be physical, will they all be the same or similar models?
This has taken a little longer than expected to get to this point. I am happy to report that all our testing indicates this will work. (Minus some sites with slower links). This does however now push me towards the next steps - Mass preparation of storage nodes.
What are your thoughts going forward?
Yes, the idea is to ultimately schedule imaging from HQ in an unattended fashion, with one image for all. (I have assigned a team member to build and test a hardware independent image; I should have the results closer to the end of the week.)
You mentioned a few limitations that I did not realize existed; however, they are still not major stumbling blocks.
For now I suppose we need to prove that we can PXE boot in a production site. (My experience with FOG, albeit somewhat limited, leads me to believe this will work)
- Prepare replacement machines that are ready to be used in the event of failure
- Prepare and install a standalone FOG server on site (pre-staged with images)
- Change PXE settings (DHCP)
- Enable PXE boot on target machines
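For the DHCP step, a hypothetical ISC dhcpd fragment is sketched below. The 10.45.3.10 address stands in for the site's local storage node, and undionly.kpxe is FOG's iPXE boot file for BIOS clients (UEFI machines would need ipxe.efi instead):

```shell
# Sketch: the DHCP options that point a site's PXE clients at its local
# FOG node. Addresses are placeholders.
SNIPPET="./dhcpd-fog.conf"
cat > "$SNIPPET" <<'EOF'
# Point PXE clients at the local FOG storage node
next-server 10.45.3.10;          # option 66: TFTP server
filename "undionly.kpxe";        # option 67: boot file (BIOS clients)
EOF
cat "$SNIPPET"
```

If the sites' DHCP is served by something other than ISC dhcpd (a router, Windows DHCP), the same two options, 66 and 67, are what need changing.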
Progress from this point onward…
@steveo There were a few more questions, I’ll start with those:
- Do you need unattended image deployment, or do you require an IT tech to sit in front of the system for image deployment?
- How do you envision deploying images? (i.e. someone sitting at HQ deploying images globally, or the site IT techs managing the process?)
- In regards to imaging target computers: is your plan to have one image for all, or one image per model?
OK, now hold on, we are heading into the details.
First let me say I’m a big supporter of FOG. We have it deployed in our organization and it works very well. BUT (IMO) FOG (in its current state) is geared more towards the SMB market than the enterprise (where I would class you, based on the size of your projected deployment). Will FOG work for you? Yes, as long as you understand the caveats.
Your 128 kbps links are a concern for me for a few reasons.
The first observation is the storage node’s dependency on the database running on the master node. Storage nodes must have 100% access to the database on the master node, or the storage node will not function. We have not tested whether there will be any impact on imaging due to communication latency between the master node and the storage nodes.
If you install the FOG client on the target computers (not mandatory for imaging with FOG), the FOG client checks in with the master node on a set interval; that interval is 5 minutes by default. With a large campus you might want to increase this check-in interval to 15 minutes to spread the load out a little. Each check-in consumes CPU on the master FOG server as well as network bandwidth. With FOG’s distributed imaging (master node, storage node) all files needed for imaging will be delivered locally, but the FOG clients still communicate over the WAN back to the master node.
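A back-of-envelope look at what that interval means, assuming the ~4500 clients mentioned later in the thread and a uniform spread of check-ins:

```shell
# Rough check-in load on the master node: clients / interval.
# 4500 is the fleet size from this thread; uniform spread is an assumption.
CLIENTS=4500
for INTERVAL in 300 900; do   # 5 min default vs 15 min
    awk -v c=$CLIENTS -v i=$INTERVAL \
        'BEGIN { printf "interval %4ds -> %.1f check-ins/sec\n", i, c/i }'
done
```

So the default interval puts roughly 15 requests per second on the master over the WAN, versus about 5 per second at 15 minutes; real bursts will be lumpier than the average.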
Multicasting only functions from the Master Node. If you need multicast imaging then storage nodes won’t work for you.
Officially you can only capture images on the Master Node, storage nodes are not intended to capture images. With that said, there is a certain configuration you can use to capture images locally.
FOG’s replication is one way Master Node to Storage Nodes. Images captured at the Storage Node level (with the specific configuration) will not be replicated back to the master node.
FOG currently doesn’t have the ability to create storage nodes in an unattended manner. Plus, to install FOG, the master node and storage nodes need to have internet access.
I have to hop off and do some other things, I’m not done here. There IS a path forward with FOG. I just want to document the difficulties first then we can work towards a solution.
Wow, thank you for your time George. I do appreciate it. :)
Will you deploy software with FOG or some other technology?
We currently use other deployment tools and will continue to do so. I won’t mention the details of the tools, but it would be nice to use FOG snap-ins to maintain the baseline images. This is in part to ensure minimal critical vulnerabilities exist within the environment after imaging, and it will minimize the image updates that need to sync across the WAN. My answer to the next question will explain the reason for this.
What is the smallest network link (in bandwidth for a remote site)?
3 sites on Diginet - 128kbps
A few (20 or so) on 2 Mbps.
The average is 4 Mbps and the biggest is around the 10 Mbps range.
I realize this is not ideal bandwidth, and I am hoping this can work if we limit image updates to at most twice a year. Even that means trickling an image out for 2 weeks or more (possible with FOG?). The 3 slow sites are almost impossible, I would imagine; we regularly send a tech with replacement computers to those. I’ll gladly send an updated replacement storage node instead to image the current machines.
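The "2 weeks or more" estimate checks out. A rough transfer-time calculation for replicating one image over these links; the 20 GB image size is an assumption for illustration, and real throughput will be somewhat lower once protocol overhead is counted:

```shell
# Best-case transfer time for one image over each link class.
# 20 GB is an assumed compressed image size, not a figure from this thread.
IMAGE_BYTES=$((20 * 1000 * 1000 * 1000))   # 20 GB
for KBPS in 128 2000 4000; do
    awk -v b=$IMAGE_BYTES -v k=$KBPS \
        'BEGIN { secs = b / (k * 1000 / 8);
                 printf "%5d kbps -> %6.1f days\n", k, secs / 86400 }'
done
```

At 128 kbps a 20 GB image needs about 14.5 days of uninterrupted saturation, which is exactly why pre-seeding the storage node and shipping it, as suggested above, is the practical answer for the slowest sites.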
Do you need to multicast images at the remote sites?
Multicasting would not be a requirement. I am not seeing the advantage as I am confident that we have 12 hour windows to complete a site.
I don’t mind the questions, ask away, I do understand the need for these questions.
@steveo Sorry, I thought of a few more questions on the commute into the office.
- Will you deploy software with FOG or some other technology?
- What is the smallest network link (in bandwidth for a remote site)?
- Do you need to multicast images at the remote sites?
Stick with me, because they are specific leading questions…
Do you plan on using the fog client on your target systems? - YES
Do the remote sites have 100% full time access to your HQ? - YES
What hardware will you use at each location for your storage node? - HP 6000 units. Old stock lying around; we had to cut budget, and new HP MicroServers were not possible without cutting into another project or two. (We realize and accept the risk of the older hardware.)
Do you need 100% coverage for unattended deployment? (i.e. can you function without boot through iPXE) - Need 100% coverage. Currently not using PXE for systems to function.
Out of the 4500 systems, how many different models do you have? Currently 9 different models, all HP: 6000, 6200, 6300, 8000, 8200, 8300, and 600 G1, G2, and G3 SFF units. (FOG is going to be crucial towards replacing this ageing fleet.)
Will your images only be created at your HQ and deployed everywhere? YES (WAN Consideration here)
Will you need to capture images at the remote locations? NO (The odd possibility would be a nice to have)
I do want to think about this for a bit. But let’s collect a bit more information first.
- Do you plan on using the fog client on your target systems?
- Do the remote sites have 100% full time access to your HQ?
- What hardware will you use at each location for your storage node?
- Do you need 100% coverage for unattended deployment? (i.e. can you function without boot through iPXE)
- Out of the 4500 systems, how many different models do you have?
- Will your images only be created at your HQ and deployed everywhere?
- Will you need to capture images at the remote locations?
There are more questions, but let’s start with those.