Remote Storage Server
-
Hey guys, I am interested in possibly building out Remote Storage servers across more than 72 locations, spanning about 3,700 machines.
Each location is a different AD (the Location plugin is going to be used).
Each location would not have a VPN. In this scenario, each location will be running hardware that we have built up as Remote Backup Servers with Hyper-V installed. I would like to inject a FOG Storage Server with very little configuration.
I have been using FOG off and on for a while now, but never without a VPN connection.
What are your thoughts on just opening the replication ports (FTP) and MySQL? (Obviously with IP-based access control.) Or would it be best to have the storage nodes use some kind of auto VPN connection (OpenVPN)?
I don't think I have found any remote Storage Node install document… based on recommended options. Perhaps we can develop one? Assuming that we are not just working with a single organization but are supporting multiple environments.
If possible, what ports are absolutely needed for firewall rules? FTP and MySQL?
Does anyone have an auto VPN script that runs on startup?
-
@sourceminer Well this is an interesting project.
The traditional FOG and storage node setup requires a full-time network connection to function. Storage nodes are similar to a traditional FOG master node except that they don't have a local database; they rely on a connection back to the master node to get their instructions and commands.
As for opening up network ports across the internet, I would be concerned about that, because FOG 1.3.x doesn't encrypt its network traffic, so all of the communications between the FOG server and storage nodes would be unprotected if not inside a site-to-site VPN.
So with your current networking setup, you don't have any preexisting site-to-site VPN/networking solution in place today?
What is your goal in having a single FOG server at HQ and storage nodes at each site vs. just putting a traditional FOG server at each site right now?
Without a persistent site-to-site connection, the FOG clients (if used) at the remote sites won't be able to check in with the master FOG server for instructions or snapin deployments.
The developers have had a feature request to enable SFTP for master node to storage node replication. This would provide an encrypted replication channel between the FOG server and storage nodes, but that option hasn't been fully implemented just yet.
Right now there are many open questions.
-
For the simpler questions:
- You can run FOG in Hyper-V, no problem.
- Storage Servers already have a minimal configuration, but you'd wanna install the OS without a GUI. Without a GUI, you can run the OS on 512 MB of RAM and 1 core, though I'd recommend 1 GB of RAM and 2 cores.
- To install a Storage Node, use the FOG installer as usual, but choose `S` for the installation type (S for Storage Node). You'll be asked for the main server's IP, MySQL credentials, etc. It's easy, the installer walks you through it, and it will even add itself as a storage node on the main server if the network is set up and the information you entered is correct. A rough sketch follows this list.
- I wrote the OpenVPNRouter project, which is a fully automatic and self-healing OpenVPN connection, though it was built for routers. It shouldn't be hard to write something similar without the routing.
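Roughly what that looks like on the storage node box - just a sketch, since the exact prompts vary a bit by FOG version, and the storage node MySQL credentials come from the main server's web UI:

```
# Grab the same FOG installer used on the main server and run it.
git clone https://github.com/FOGProject/fogproject.git
cd fogproject/bin
sudo ./installfog.sh
# When prompted:
#   - installation type: S  (Storage Node)
#   - the main/master FOG server's IP address
#   - the storage node MySQL user/password shown on the main server's
#     web UI (FOG Configuration page)
# If the network is up and the credentials are right, the node registers
# itself with the main server automatically.
```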
Here’s some reading you would be interested in:
https://wiki.fogproject.org/wiki/index.php?title=Location_Plugin
https://wiki.fogproject.org/wiki/index.php?title=Replication
-
@george1421 said in Remote Storage Server:
The developers have had a feature request to enable SFTP for master node to storage node replication. This would provide an encrypted replication channel between the FOG server and storage nodes, but that option hasn't been fully implemented just yet.
SFTP would be simpler than automating OpenVPN and would secure image replication. However, think of all the MySQL queries and other traffic that cross the network in plain text.
A VPN is the only way to secure all of that at once.
-
Thanks for the responses guys.
So yes, I know about the installer and selecting S. The reason for keeping it central is that we do central IT services for several customers, and having a centrally managed deployment solution is better for standards' sake. We have also used a solution called Persystent by Utopic, but we seem to find more bugs in it than you'd expect given the tens of thousands of deployments they have, so I have looked into a hybrid model of FOG and Persystent.
The goal is to have a <12 minute resolution time on the help desk. With tools like FOG and Persystent, that goal can become reality if the kinks can be worked out.
-
@Wayne-Workman With this OpenVPN Router, what would you think of running it on the FOG server itself?
That way it would/could allow just the remote machine and the FOG server to communicate with one another.
-
@sourceminer said in Remote Storage Server:
With this OpenVPN Router, what would you think of running it on the FOG server itself?
My router project isn't designed for exactly what you're thinking. It reliably routes my home traffic transparently straight to PIA, which keeps my internet traffic from being analyzed or monitored by my ISP. But it's not ready for the kind of production use you need, and it lacks features, support, and community.
What you would need is a setup where the client is an endpoint only, not a router. You would also need to work out routes so that traffic destined for the main FOG server's IP and ports goes through the VPN tunnel rather than the open internet. This is simple enough; there's lots of material online about a single-NIC system routing properly between a tunnel and the open internet:
https://openvpn.net/archive/openvpn-users/2003-07/msg00032.html
https://wiki.debian.org/OpenVPN
https://www.digitalocean.com/community/tutorials/how-to-setup-and-configure-an-openvpn-server-on-centos-7
The main FOG server would need to run a server configuration of OpenVPN - server in the sense that many clients establish VPN tunnels with it. There would need to be a subnet just for this, like a 255.255.255.0 class C subnet.
On the headquarters side you would need a pretty beefy box, because there is encryption overhead in being the server side of OpenVPN. A lot of chatter would be going over the VPN if you're using the FOG Client.
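On the HQ side it would be something along these lines - only a sketch with made-up paths and the CentOS 7 style openvpn@ service, and it assumes the CA and server cert/key were already generated with easy-rsa rather than reusing FOG's web SSL cert:

```
# /etc/openvpn/fogvpn.conf on the main FOG server (illustrative values).
cat <<'EOF' | sudo tee /etc/openvpn/fogvpn.conf
port 1194
proto udp
dev tun
# Dedicated class C subnet just for FOG <-> storage node traffic.
server 10.99.0.0 255.255.255.0
ca   /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/server.crt
key  /etc/openvpn/keys/server.key
dh   /etc/openvpn/keys/dh2048.pem
keepalive 10 60
persist-key
persist-tun
EOF

# Open the VPN port and start the service.
sudo firewall-cmd --permanent --add-port=1194/udp && sudo firewall-cmd --reload
sudo systemctl enable openvpn@fogvpn && sudo systemctl start openvpn@fogvpn
```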
I think this is doable.
-
I think I can modify the FOG installer to set up OpenVPN back to the main server, and modify the main server installer to set up an OpenVPN server.
The FOG main server already has SSL certs present.
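On the storage node side, the installer would basically just drop in a client config and point the node at the main server's tunnel address - again only a sketch with placeholder names; since nothing pushes a default route, only traffic for the tunnel subnet would use the VPN:

```
# /etc/openvpn/fogvpn.conf on a storage node (illustrative values).
cat <<'EOF' | sudo tee /etc/openvpn/fogvpn.conf
client
proto udp
dev tun
# Hypothetical public name of the HQ FOG server.
remote fog-hq.example.com 1194
ca   /etc/openvpn/keys/ca.crt
cert /etc/openvpn/keys/storage-node.crt
key  /etc/openvpn/keys/storage-node.key
resolv-retry infinite
persist-key
persist-tun
EOF

sudo systemctl enable openvpn@fogvpn && sudo systemctl start openvpn@fogvpn
# The node would then be registered in FOG using the main server's tunnel IP
# (e.g. 10.99.0.1) instead of a public address.
```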
@Tom-Elliott I’m going to try to do this.
-
Looking more into this, it's very doable. I'm going to mess around for a while and get a proof of concept going between two remote FOG servers.
-
@Wayne-Workman This would be HUGE!! Perfect. Then it's all just tied in and working with a simple install.
-
I think we need to step back and do a sanity check on what you really need here.
I understand the whole centrally managed (MSP) perspective.
I think you need to document exactly what you hope to achieve from this solution. I am intimately familiar with OpenVPN, FOG, and the concepts involved here.
Unless you are going to establish a full-time VPN connection to each of these remote locations, you would be better served by having a full FOG server at each location. By placing a full FOG server at each location you can take advantage of all of FOG's capabilities (more than just pushing images to client computers). The FOG client must be able to reach the master FOG server to check in, change system names, reboot on demand, and receive instructions from the FOG server. Storage nodes can't do this today, and even if they could, they would need a live connection back to the master server, because they don't have a local database installed; only a full FOG server has a database.
So then I have to ask: do you / will you create master OS images at your HQ for these remote locations? If so, you will need to have a VPN set up so that the FOG replicator can do its replication. The replicator runs continuously, so you may need the VPN established continuously. It's possible to have a cron job open the VPN tunnel and then start the replicator for after-hours image replication. You may be better served by just moving the files to the remote locations on flash drives and ignoring replication entirely. How often will your images really change? Is it worth the effort?
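For what it's worth, the after-hours idea could be as simple as two root crontab entries - just a sketch, assuming an openvpn@fogvpn style client config lives on the same box as FOG's FOGImageReplicator service:

```
# crontab -e (as root)
# 22:00 - bring the tunnel up, then start image replication
0 22 * * * /usr/bin/systemctl start openvpn@fogvpn FOGImageReplicator
# 05:00 - stop replication and tear the tunnel back down
0 5 * * * /usr/bin/systemctl stop FOGImageReplicator openvpn@fogvpn
```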
So what it really comes down to (if there is a FOG server at each location) is that you would just need remote access to the FOG web GUI at each location to issue capture and deploy commands to the remote clients.
I can't say what the right answer is for your final solution, and I'm not saying yes or no here either. The only point I'm raising is to think about what you really want to accomplish, how much effort it's going to take, and how long it will take to pay back the effort put into the solution.
Can you do this with FOG and OpenVPN? Probably. If you go this route you will want to run the OpenVPN software on all FOG servers/storage nodes and then have the clients set up port forwarding on their internet routers to forward the OpenVPN port directly to your FOG hardware at their location. Since you will not be routing traffic beyond the OpenVPN endpoints at each end, you should not have to worry about (near- and far-end) IP range conflicts with OpenVPN.
-
George has legitimate points; you really should consider them. A single 20 GB image, which is a very lean image, could take days to replicate over a slow link. I assumed you had already thought about this before you posted here, but you really should think it through.
I'm working on the OpenVPN stuff anyway because I think it would be a nice add-on piece for FOG, since communications between storage nodes and the main server are currently totally unsecured. We really need something here for those who want it, and that something would be even better if Tom (senior dev) didn't have to re-code a ton of stuff. Having an add-on that configures OpenVPN would solve this.
-
@george1421 Totally agree with taking a step back. To answer your questions.
We do at present create master images for our clients in our Move Add Change Room.
Machines get delivered to our clients and then imaged. We have an entire department dedicated to just maintaining images and applying software updates, so syncing down to the client is the preferred method. Correct on the statement about just needing port 443 access to the FOG UI.
As it relates to the community, the benefit can be twofold… First, no dependency on existing WAN connectivity (pre-existing VPN tunnels or MPLS links), and the entire solution becomes more secure as communication happens through an SSL tunnel. Second, it allows for disconnected branches and minimal configuration to get an entire FOG solution up and running. Imagine going from download to install in only an hour, with multiple locations configured.
-
I’m nowhere close to done - I’ve barely even started really. But I’m pushing stuff to here: https://github.com/wayneworkman/fog-community-scripts/tree/master/FogOpenVPN
-
Just curious where this has gone… I keep coming back to the idea of using FOG, but without some sort of replication over a secure channel (not managed VPNs, but automatically connected, encrypted OpenVPN channels), it's not a really viable solution for MSPs IMO.
-
I’ve not done any more work on this. The best solution is still to have permanent tunnels from site to site via a routing appliance like a Cisco router or checkpoint router or PFSense router or some other router solution.