Remote Imaging -- Will FOG Work?
-
[quote=“Alex Elkins, post: 42373, member: 28621”]Okay, I understand your suggestion. Wouldn’t moving the images from the master to the storage node still use up the same amount of bandwidth, but only one time (or however many times you push a new image to the storage node)?
E: Site-to-site VPN or what?[/quote]
I was figuring that since he has to go to each site anyway to set up the storage nodes, he might as well take a copy of the needed images with him on a flash drive, copy them to where they belong, and make the changes needed for them to work.
Also, if a transfer from the master to a storage node is necessary, it can be scheduled during off-peak hours… over the weekend, overnight on a Sunday, whatever works best. That still has tremendous benefit.
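If you do carry the images over by hand, the main thing to get right on each node is putting the image folders in the right place and giving the FOG service account ownership of them. Here's a rough sketch of that copy step in Python, assuming the default /images store, a “fog” service account, and a made-up flash-drive mount point (all assumptions, adjust to your install):
[code]
#!/usr/bin/env python3
# Sketch: copy image folders from a flash drive into a storage node's image
# store and hand ownership to the FOG service account. Run as root on the node.
import shutil
import subprocess
from pathlib import Path

SOURCE = Path("/media/usb/images")   # assumed flash-drive mount point
DEST = Path("/images")               # default FOG image store
FOG_USER = "fog"                     # default service/FTP account name

for image_dir in sorted(SOURCE.iterdir()):
    if not image_dir.is_dir():
        continue
    target = DEST / image_dir.name
    print(f"copying {image_dir} -> {target}")
    shutil.copytree(image_dir, target, dirs_exist_ok=True)
    # the node's FTP/replication services expect to own the image files
    subprocess.run(["chown", "-R", f"{FOG_USER}:{FOG_USER}", str(target)], check=True)
[/code]
You would still register the image definitions on the master's web UI as usual; the hand-carry only saves pushing the raw data over the WAN. The same idea applies to the scheduled off-peak transfer: kick off the replication (or an rsync of /images) from cron on a Sunday night so it never competes with daytime traffic.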
-
There are ways you could make FOG work for your situation, but it would take some work. I’ve done something along these lines as an experiment with a feature that’s in development, but you honestly might want to look at different solutions unless you’re convinced FOG is the way to go.
-
[quote=“Junkhacker, post: 42375, member: 21583”]There are ways you could make FOG work for your situation, but it would take some work. I’ve done something along these lines as an experiment with a feature that’s in development, but you honestly might want to look at different solutions unless you’re convinced FOG is the way to go.[/quote]
He hit on the “Cost” factor pretty hard…
-
Don’t forget, what you’re talking about (multiple nodes at separate locations, managed by a central “server”) is 100% possible using the Location plugin. This is EXACTLY what it was designed for.
-
yes
-
[quote=“Wayne Workman, post: 42363, member: 28155”]I’m going to take a stab at this question. I’ve been using FOG for about 2 months now, and it’s working great.
So, you don’t want a storage node at each location because of cost.
My solution to that is to just put a storage node at each location. It doesn’t need to be anything fancy. An old dual-core with a gigabit interface will get the job done just fine. You could even go with an old P4 with Hyper-Threading. It’d be slower, but still.[/quote]
It is also possible to install FOG on a [URL=‘http://www.raspberrypi.org/’]Raspberry Pi[/URL]. This could be a cheap option for you. So far we have only verified that it installs correctly on a Pi, but at around $50 per Pi it could well be a workable solution for you. The Pi can sit right next to the router and plug into the same power strip.
Of course we would be willing to help you at every turn.
-
[quote=“Wolfbane8653, post: 42382, member: 3362”]It is also possible to install FOG on a [URL=‘http://www.raspberrypi.org/’]Raspberry Pi[/URL]. This could be a cheap option for you. So far we have only verified that it installs correctly on a Pi, but at around $50 per Pi it could well be a workable solution for you. The Pi can sit right next to the router and plug into the same power strip.
Of course we would be willing to help you at every turn.[/quote]
Now that’s a solution!
I vote for this one.
Clearly, you should just buy one at first, and we can get that working and go from there.
-
As much as I like playing with Raspberry Pis, I don’t think they’re a good solution to this problem. A refurbished desktop can be had for a little more than the cost of a Pi, and it will have gigabit Ethernet and can take a real hard drive.
-
You could also look into the Banana Pi (it has a gigabit NIC on it).
-
And there’s still the “connection” problem. Either you use dynamic DNS and port forwarding (all unencrypted) to connect the FOG servers at the different sites, or you go the “long way” and set up VPN gateways everywhere…
-
Hi, I’ve had this working since 2012. Basically I have three FOG servers: the master where I am based, and two others, one about half an hour away and the other an hour away. The three sites are connected with IPsec tunnels.
I upload an image to the local FOG server and it replicates slowly across the tunnel to the remote sites. Once it has transferred, I can then create an image task for a remote PC. From there I run a script on the remote PC which forces it to PXE boot, and it gets reimaged.
Make a couple of customisations and it’s good to go. HOWEVER, since FOG 1.2 the transfers across the IPsec tunnel have been slower and occasionally stall, which is very frustrating, especially since the transfers take over a day to complete!
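For anyone who wants to copy that “force it to PXE boot” step: the script itself isn’t shared here, but the idea is simply to create the deploy task in the FOG web UI first and then reboot the machine remotely so it picks the task up on its next network boot. A minimal sketch, assuming Windows clients that are already set to boot from the NIC first and using made-up hostnames:
[code]
#!/usr/bin/env python3
# Sketch: after creating deploy tasks in the FOG web UI, reboot the target
# Windows PCs remotely so they PXE boot and pick the tasks up.
# Hostnames are placeholders; run from an account with admin rights on the targets.
import subprocess

TARGETS = ["LAB1-PC01", "LAB1-PC02"]   # hypothetical remote hostnames

for host in TARGETS:
    # Windows built-in remote restart: /r = restart, /t 0 = immediately
    result = subprocess.run(
        ["shutdown", "/r", "/m", f"\\\\{host}", "/t", "0"],
        capture_output=True, text=True,
    )
    status = "ok" if result.returncode == 0 else f"failed: {result.stderr.strip()}"
    print(f"{host}: {status}")
[/code]
-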
How are the replication tasks working? Nothing has changed directly in how things transfer. The speed, or lack of it, sounds environmental rather than something FOG is doing or did.
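One quick way to check whether it’s environmental is to watch how fast the image store on the remote node is actually growing while the replicator runs. A rough sketch, run on the remote node, assuming the default /images path:
[code]
#!/usr/bin/env python3
# Sketch: estimate replication throughput by sampling how fast the remote
# node's image store grows over a fixed interval.
import time
from pathlib import Path

IMAGE_STORE = Path("/images")   # default FOG image path; adjust if yours differs
INTERVAL = 60                   # seconds between the two samples

def store_size(path: Path) -> int:
    """Total size, in bytes, of every file under path."""
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

before = store_size(IMAGE_STORE)
time.sleep(INTERVAL)
after = store_size(IMAGE_STORE)

rate_mbit = (after - before) * 8 / INTERVAL / 1_000_000
print(f"~{rate_mbit:.2f} Mbit/s arrived over the last {INTERVAL} seconds")
[/code]
If that number is close to what the IPsec tunnel can realistically deliver, the bottleneck is the link (or the encryption overhead on the gateways) rather than FOG.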