Ah, figured it out. It was a routing issue between the two servers.
It may be worth logging a user-friendly error if the master can’t connect to the storage node.
Ah, I was unaware of plugins!
I have now got my setup how I want it, as follows:
Master server at the central site (10.5.140.0/22 range), and a storage server (with TFTP also) at the other site (10.5.144.0/22 range).
We have just been imaging a number of machines, and while the majority did what they should and grabbed the data from the second site, some of them didn’t: they grabbed the data from the central site instead. Not a huge problem, as we have a relatively good connection between the sites, but it slows the process down significantly.
Can anyone advise how I can stop this happening?
Absolutely. Our two servers are in different schools, with an IPsec VPN in between. I am currently building a new network consisting of a single domain across all the schools, but have to maintain the existing multi-domain setup in the meantime.
On my firewall, I hadn’t allowed each side to reach the other from my new IP address range. Allowing this on the firewall resolved the issue.
Simply put, the two servers were unable to route to each other.
That said, a good improvement would be for the replication services to log an error stating that the server can’t be contacted.
I have installed a storage node following the instructions on the wiki. All looks fine as far as I can see, but no sync is happening. I am receiving the following error in both the Image Replicator and Snapin Replicator logs:
[02-07-17 10:49:37 am] * Type: 8, File: /var/www/html/fog/lib/fog/fogbase.class.php, Line: 841, Message: Undefined index: storagegroupID, Host: 10.5.147.249, Username: fog
I have double checked the storage node settings on the master server, including ensuring the password is correct. What am I missing?
I’m still a little bewildered by the idea that 2+ GB of data is considered large, to be honest.
With 10/40 Gbit core and server connectivity and gigabit to the desktop, 2 GB of data is tiny, even when deploying to hundreds of machines. 2 GB over a 1 Gbit connection would take about 17 seconds to copy (theoretical, obviously; reality will vary dramatically with disk speeds, network congestion, etc.).
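That 17-second figure is easy to sanity-check. A quick sketch (my assumptions: “2GB” read as 2 GiB, and the full 1 Gbit/s of the link usable):

```python
# Back-of-the-envelope transfer-time check (illustrative only, not FOG code).
# Assumes the payload is 2 GiB and the link delivers a full 1 Gbit/s.

def transfer_time_seconds(size_gib: float, link_gbps: float) -> float:
    """Theoretical best-case copy time: total bits divided by link speed."""
    bits = size_gib * 2**30 * 8       # GiB -> bits
    return bits / (link_gbps * 1e9)   # Gbit/s -> bits per second

print(f"{transfer_time_seconds(2, 1):.1f} s")  # prints "17.2 s"
```

Real-world times will be longer once protocol overhead, disk speeds, and congestion are factored in, but it shows the order of magnitude.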
Whichever way you do the install, that data is still going to go across the network - be it by running from a share or by copying and running it locally on the machine.
@Tom-Elliott Licensing in schools is usually done by site license, meaning each school has its own keys. There are then limits on how many installs of each package can be performed. So, yes, licensing is a major reason why we can’t do this.
We also don’t really want to put everything on every computer; that would be absurd, especially when we have plenty of computers with 128GB SSDs.
All of this is somewhat beside the point, though: we want to do things in a certain way, so we need snapins to be able to handle it.
Let’s put it this way: I am now running IT for six schools and three nurseries. Each of those schools has a dozen PC types, with about three different roles for each, and this number is likely to grow as more schools join our trust.
So, creating a full image for each machine type and role would be a lot of unnecessary work.
Instead, a single general image (containing the base level of software), combined with Snapins, reduces that work tremendously, along with the storage needed for images.