[Requested] Multi-Site Location Patch
-
[SIZE=3][B]Please be aware this is patched for 0.31[/B][/SIZE]
-
Please note that Line 34 of service/Pre_stage1.php should read:
[INDENT=1][PHP]if ( $location != null && $location != -1 )[/PHP][/INDENT]
It currently reads:
[INDENT=1][PHP]if ( $location == null || $location == -1 )[/PHP][/INDENT]
This is causing execution to stop during the location check and report that there is no location set for the PC.
[edit] Only while a location is actually set; I didn't try it without a location set. Stephen.
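For context, a minimal sketch of how the corrected guard would gate the check (the surrounding branches are assumptions for illustration, not the actual Pre_stage1.php code):
[PHP]
// Illustrative only: $location holds the location looked up for the host.
if ( $location != null && $location != -1 )
{
    // A location is set for this PC: carry on and pick the storage
    // node that belongs to that location.
}
else
{
    // No location set: with the original inverted condition, hosts
    // that DID have a location were incorrectly landing here.
}
[/PHP]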
-
Brilliant, thanks for this. Just out of interest, I have been having issues installing a storage node (I have posted a support question, but it has largely been ignored). I can't install FOG storage nodes on Ubuntu 10.04 (32 or 64 bit). Can anybody confirm that it works/doesn't work and suggest a workaround?
-
[I]Who has the most sites here? Has anyone managed more than 5 FOG sites? How do you do it?[/I]
How could we make FOG a real multi-site solution?
Would there be a way to implement a sweet multi-site setup using DNS with site-specific A records?
What happens when a particular laptop roams between sites? Would it not try to connect to an incorrect storage node?
Could something like this work?
OPTION (1)
[QUOTE]
SITE 1 (MAIN SITE)
[INDENT=1]FOG-SERVER-1-1[/INDENT]
[INDENT=1]FOG-NODE-1-1[/INDENT]
[INDENT=1]FOG-NODE-1-2[/INDENT]
SITE 2
[INDENT=1]FOG-SERVER-2-1 (PXE only? Or MySQL replication?)[/INDENT]
[INDENT=1]FOG-NODE-2-1[/INDENT]
SITE 99
[INDENT=1]FOG-SERVER-99-1 (PXE only? Or MySQL replication?)[/INDENT]
[INDENT=1]FOG-NODE-99-1[/INDENT]
Then in DNS, each site needs some common A/CNAME records:
[INDENT=1]FOGSERVER - unique to each site[/INDENT]
[INDENT=1]FOGSTORAGENODE-1 - unique to each site[/INDENT]
[INDENT=1]FOGSTORAGENODE-2 - unique to each site[/INDENT]
[/QUOTE]
OPTION (2)
[QUOTE]
SITE 1 (MAIN SITE)
[INDENT=1]FOG-SERVER-1-1[/INDENT]
[INDENT=1]FOG-NODE-1-1[/INDENT]
[INDENT=1]FOG-NODE-1-2[/INDENT]
SITE 2
[INDENT=1]FOG-NODE-2-1 (also has PXE, using rsync for tftpboot)[/INDENT]
SITE 99
[INDENT=1]FOG-NODE-99-1 (also has PXE, using rsync for tftpboot)[/INDENT]
Then in DNS, each site needs some common A/CNAME records:
[INDENT=1]FOGSERVER - main site[/INDENT]
[INDENT=1]FOGSTORAGENODE-1 - unique to each site[/INDENT]
[INDENT=1]FOGSTORAGENODE-2 - unique to each site[/INDENT]
[/QUOTE]
Then no matter which site you are at, you know it will always use the correct storage node and server.
ISSUES
Option (1)
[LIST]
[*]Has multiple databases; how could you keep the databases in sync?
[LIST]
[*]Having a local database would surely improve performance vs. a slow WAN link.
[*]It would still operate if the link between sites is down (just no sync).
[*]Assuming the sites are connected via some form of WAN/LAN/MPLS/VPN etc., some replication is necessary for the database.
[*]MySQL database replication? Keeping all sites in sync means the agent can contact the server and the relevant jobs are there ready, so you only need to manage ONE single FOG web interface for all sites (see the sketch after this list).
[/LIST]
[*]rsync (or DFS) can be used for the stored images, snap-ins, and the tftpboot folder.
[/LIST]
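For illustration, a minimal MySQL master/slave replication sketch for the FOG database (the hostnames, the 'repl' user/password, and the 'fog' schema name are assumptions; adjust for your install).
On the master's my.cnf:
[INDENT=1]server-id = 1[/INDENT]
[INDENT=1]log_bin = mysql-bin[/INDENT]
[INDENT=1]binlog_do_db = fog[/INDENT]
On each site's slave my.cnf (server-id must be unique per site):
[INDENT=1]server-id = 2[/INDENT]
[INDENT=1]replicate-do-db = fog[/INDENT]
Then on the slave, point it at the master (take the log file/position from SHOW MASTER STATUS on the master):
[INDENT=1]CHANGE MASTER TO MASTER_HOST='fog-server-1-1', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;[/INDENT]
[INDENT=1]START SLAVE;[/INDENT]
Note this gives you one writable master; the slaves are read-only copies, so all edits would still happen through the main site's web interface.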
Option (2)
[LIST]
[*]One single database; administration is easier, and there is less to go wrong with the database?
[*]Slower performance, as agents are talking across WAN links to the central site?
[LIST]
[*]How much data would 1, 10, or 500 agents use across a WAN link per day, chattering?
[*]SCCM here bogs down an entire network; what about FOG?
[*]Can snap-ins be downloaded from storage nodes instead of the server?
[/LIST]
[/LIST]
Option (3)
[LIST]
[*]Re-think the entire approach:
[LIST]
[*]Use the FOG server as an SQL database and web interface only, with no storage.
[*]Use storage nodes for images, snap-ins, and PXE boot, and sync them.
[/LIST]
[*]This way, each branch site has a storage node which stores the images, snap-ins, and tftpboot folder, all synced between storage nodes using rsync (controlled by the server?).
[*]The FOG administrator administers one database and uploads snap-ins and images to one node, and it is replicated.
[/LIST]
Thoughts?
-
Option 3 is how I have our infrastructure set up. We have 55 sites; a lot of those are branch sites and only have VPN links. We have the FOG server on a VM on our server scope, a master node in the "Build Area" (which has its own 1GB switch/scope, so uploading and downloading images, snap-ins etc. is quick), and another master node at our failover site; rsync and crontab sync the changes down to the relevant sites, depending on how they're routed (main office to branch site). It works extremely quickly: the Build Area gets 4-5GB/min on deployment and the others 1.4-1.5GB/min. All data transfers on site (images, snap-ins, tftpboot etc.); the only traffic going across the VPN is the reporting to and from the FOG server to kick off the image etc.
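As a rough sketch of that rsync/cron pairing (the branch node hostname is an assumption, it presumes passwordless SSH keys between the nodes, and it uses the default FOG 0.3x paths of /images and /tftpboot):
0 18 * * * rsync -av --delete /images/ fog-node-2-1:/images/
30 18 * * * rsync -av --delete /tftpboot/ fog-node-2-1:/tftpboot/
One pair of lines like this per branch node keeps the heavy transfers inside the replication window.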
-
Oh, and image replication is set to run only between 6pm and 6am; snap-in changes are set to sync every hour. The first sync is quite heavy, but that's done in the "Build Area" while the storage node is being set up. The FOG server also pushes out a script that looks for the installers, which are copied from the relevant node just after the image is deployed, before it boots up to sysprep, so the installers also run locally to the machines.
-
The roaming issue has never been a problem for us because we use FOG purely for image deployment. Software rollouts, Windows updates, printers etc. are handled by other products, so unless the machine needs to be re-imaged this issue doesn't crop up, and we are in the habit of confirming the location before kicking off the image.
-
How have you disabled the primary server as a storage node so nothing is deployed from it?
Wouldn't option 3 be as easy as having a hardcoded "fogstoragenode" DNS A record entry for primary deployment? That's your multi-site done.
As in: the primary site has the "fogserver", which is what the agents talk to to look for queued jobs, send back their login info etc., but when it comes to the actual deployment of either an image or a snap-in, they automatically look to the "fogstoragenode" DNS entry instead of the "fogserver" DNS entry.
Each branch site has its own "fogstoragenode" DNS A record, so the agents on that site talk to the correct one.
Just thinking of ways to make it easier for any roaming users, so they always download images and snap-ins from the "local" storage node.
We have a lot of roaming users, so it would be good to ignore which site they are at and make the whole process automatic.
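To illustrate the idea (this is not actual FOG client code; the DNS names and the fallback are assumptions):
[PHP]
// Illustrative only: resolve the site-local storage node by a
// well-known DNS name instead of a hardcoded IP address.
$node = gethostbyname("fogstoragenode");
if ( $node == "fogstoragenode" )
{
    // gethostbyname() returns its input unchanged on failure,
    // so fall back to the central FOG server.
    $node = gethostbyname("fogserver");
}
// $node is now the IP the agent should pull images/snap-ins from.
[/PHP]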
-
[quote=“Devlin7, post: 2571, member: 612”]Brilliant, thanks for this. Just out of interest, I have been having issues installing a storage node (I have posted a support question, but it has largely been ignored). I can't install FOG storage nodes on Ubuntu 10.04 (32 or 64 bit). Can anybody confirm that it works/doesn't work and suggest a workaround?[/quote]
I don’t know about Ubuntu 10.04 but I have gotten a working storage node up and running on Ubuntu 11.10.
The main thing I was missing the first couple of times I tried is that the username and password that the FOG setup spits out for the storage node need to be entered into the FOG Management console while creating the Storage Node object.
-
Here’s what I’m doing for Portable Storage Nodes.
[INDENT=1]Install a standard storage node on the laptop (assume you set the IP address as 192.168.1.2 during setup).[/INDENT]
[INDENT=1]Edit /var/www/fog/commons/config.php.[/INDENT]
[INDENT=1]Replace all "192.168.1.2" entries with getenv("SERVER_ADDR").[/INDENT]
[INDENT=1]Register a static DHCP reservation for each site the laptop will be at.[/INDENT]
[INDENT=1]Enter a storage node entry in the management console for each location that the laptop will be at, using the static reservation you set.[/INDENT]
This basically tricks the storage node into using its current IP address from config.php rather than a static IP address. The storage node entries can then be set to different locations, and the laptop server will only be "on location" when it gets the proper IP address for the subnet it's on.
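A minimal before/after sketch of the config.php change (the constant name STORAGE_HOST is a stand-in; apply the substitution to whichever defines carry the 192.168.1.2 address in your config.php):
[PHP]
// Before: the node's address is baked in at install time.
define( "STORAGE_HOST", "192.168.1.2" );

// After: take the address of whichever interface the request
// arrived on, so the node "moves" with the laptop.
define( "STORAGE_HOST", getenv( "SERVER_ADDR" ) );
[/PHP]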
I've not tested this yet, but there is no reason it shouldn't work.
You can then "disable" the primary storage node by simply not assigning it a location. If you have not installed the Multi-Location PXE server and you group all nodes in the same storage group, you should even be able to have images automatically push down to the laptop storage nodes.
Something to think about.
Stephen.
-
Hey Lee - glad to see you are still active. I had to rebuild my setup here in Montana, as my ESXi host ran into a disk controller problem and we weren't able to salvage the drives... of course we didn't have a backup. Luckily, though, because of my year-old install of this patch, we had copies of all of the images on the 2 remote nodes.
The question is this: when you say "[B][SIZE=3]patched for 0.31[/SIZE][/B]", is it safe to assume that we don't yet want to update to 0.32?
Stephen, your portable storage node sounds like a cool idea... we only use FOG a couple of times a year, so the effort of building a FOG storage node (even on a VM) at our remote sites seems like extra effort. I may have to try it out.
Thanks, Pat
-
Hi Pat,
Yes, the files are based on the 0.31 code. I haven't yet changed it to 0.32 because there were so many changes being made; it seemed sensible to wait for 0.33 to be released, and I will then re-code the location patch etc. for 0.33.
@Syluspilot
A DNS A record would only work if each site had its own local DNS, wouldn't it? However, at least 75% of our sites use the DNS from our main office (or at least the DNS records are replicated). But I do like the concept of your approach.
-
Lee,
How did you set the replication to happen between 6pm and 6am?
-
Turn off the FOGImageReplicator service manually (/etc/init.d/FOGImageReplicator stop), and then use crontab to start it at 6pm and stop it again at 6am:
sudo -s
crontab -e
Add this at the bottom of the file:
0 18 * * * /etc/init.d/FOGImageReplicator start
0 6 * * * /etc/init.d/FOGImageReplicator stop
-
Lee,
I got some help from a co-worker and basically set it up as a cron job that runs at 21:00 daily. We tested to make sure that the process ended by commenting out the sleep line. But I am still curious as to how you set it up.
-
[quote=“Lee Rowlett, post: 3259, member: 28”]Turn off the FOGImageReplicator service manually (/etc/init.d/FOGImageReplicator stop), and then use crontab to start it at 6pm and stop it again at 6am:
sudo -s
crontab -e
Add this at the bottom of the file:
0 18 * * * /etc/init.d/FOGImageReplicator start
0 6 * * * /etc/init.d/FOGImageReplicator stop[/quote]
Thanks, I didn't see your response before I left the other comment. Since no one should be uploading images after 6, I don't need the process to run in a loop, and everything should be uploaded by 9.
Thank you again for your quick answer.
-
Hey all,
Just wanted to update you. The portable FOG storage nodes work well. They have been tested at 3 remote sites imaging concurrently. All went relatively well. No major hitches.
My next issue is the [URL=‘http://fogproject.org/forum/threads/fog-multi-site-multicast.796/’]Multicast from storage nodes[/URL] for which I have opened a new thread.
Stephen.
-
Thanks for the update, Stephen... interested to try it out.
I'm wondering where to start looking for my issue with this.
Right now I'm working at one of the "remote locations" (it's summer vacation for the schools). I've uploaded a new image from a machine, but what I am finding is that the upload process pushes the image files over to the "main location"; however, when I try to pull it down to another client with a "deploy", the target client looks for it here at the remote location.
In both cases, the hosts are configured to be at the "remote location", but the image upload is not saving to the remote storage node. I can copy it easily enough, but I would like to save as much time as possible.
Is this by design? Or did I miss a step?
Thanks, Pat
-
[quote=“PatinMT, post: 3976, member: 913”]Thanks for the update, Stephen... interested to try it out.
I'm wondering where to start looking for my issue with this.
Right now I'm working at one of the "remote locations" (it's summer vacation for the schools). I've uploaded a new image from a machine, but what I am finding is that the upload process pushes the image files over to the "main location"; however, when I try to pull it down to another client with a "deploy", the target client looks for it here at the remote location.
In both cases, the hosts are configured to be at the "remote location", but the image upload is not saving to the remote storage node. I can copy it easily enough, but I would like to save as much time as possible.
Is this by design? Or did I miss a step?
Thanks, Pat[/quote]
Hey Pat,
Because of how the location patch is coded, uploaded images get pushed back to the main storage node. This is because of the Image Replicator service.
As far as I can tell*, an image that is added to a secondary storage node is not uploaded back to the master storage node; transfers are downwards only. So the image needs to be uploaded to the master storage node so that it can then be redistributed throughout the storage group.
[INDENT=1]* I only took a quick look at that part of the code, so I may be misinterpreting this.[/INDENT]
You can find the code for this behaviour in /var/www/fog/service/Pre_Stage1.php.
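A simplified sketch of the behaviour being described (the names are illustrative, not the actual patch code):
[PHP]
// Illustrative only: choosing the storage node for a task.
if ( $taskType == "upload" )
{
    // Uploads always target the master node of the storage group,
    // so the Image Replicator can push the image down to the
    // secondary nodes afterwards.
    $node = $storageGroup->getMasterNode();
}
else
{
    // Deploys can use the node matching the host's location.
    $node = $storageGroup->getNodeForLocation( $location );
}
[/PHP]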
Stephen.