FOG: Main Sites and Branches Organisation
-
@processor If you have a full FOG server at each location, then you will have to manage each FOG server independently. You can use the LDAP plugin to avoid having to create local users on each FOG server. To deploy images you will have to go to the FOG server located at each site.
You can create a storage group where the FOG server at HQ is the master node and each of the FOG servers at the remote locations is “listed” as a storage node in this storage group. Set up this way, FOG will replicate images from the master node to the full FOG servers at each site. The only manual action will be to export the image definitions from the HQ FOG server and then import them on each site’s FOG server. This is quick and easy to do via the web UI. I wish it were a bit more automatic; this configuration is officially unsupported, but it works.
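If you set it up this way, you can watch replication happen from the master node by tailing the replicator log (assuming a default install; adjust the path if yours differs):
# on the HQ (master) FOG server: follow image replication activity
sudo tail -f /opt/fog/log/fogreplicator.log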
Having a full FOG server at each location does have its advantages too. You can capture and deploy at each location. You can also multicast at each location, since multicasting can only be done from a full/normal FOG server. And you will still be able to deploy even if your WAN is down.
-
@george1421
Hi,
This seems to be the best option for our case. I did what you suggested, but something is not working as expected.
I configured a new node in the default storage group on the main server. This is how I configured it:
- IP: IP of the branch server
- Image path: /mnt/FOG, same as configured on the branch server
- FTP path: same
- Interface: ens160 (outgoing interface of the main server)
- Management username: fogproject
- Management password: same as configured on the branch server (see the check below)
All other settings were left at their defaults.
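To double-check that the management username/password entered above really match the branch server, one option (assuming a default install, where the FOG installer records the credentials in .fogsettings) is:
# on the branch server: show the FTP/management credentials FOG was installed with
sudo grep -E "^(username|password)=" /opt/fog/.fogsettings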
On the branch server’s default storage node, the master option is now unchecked.
I can see the new storage node on the main server’s dashboard, with space available.
But the replication is not working as expected. This is what I can see in the logs:
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1.mbr (FR3-FOG-01)
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1.minimum.partitions (FR3-FOG-01)
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1.original.fstypes (FR3-FOG-01)
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1.original.swapuuids (FR3-FOG-01)
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1.partitions (FR3-FOG-01)
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1p1.img (FR3-FOG-01)
[06-18-19 9:09:06 pm]  # W10_Remote-20190523-HS-745-755: File does not exist d1p2.img (FR3-FOG-01)
[06-18-19 9:09:06 pm]  | CMD: lftp -e 'set xfer:log 1; set xfer:log-file /opt/fog/log/fogreplicator.W10_Remote-20190523-HS-745-755.transfer.FR3-FOG-01.log;set ftp:list-options -a;set net:max-retries 10;set net:timeout 30; mirror -c --parallel=20 -R --ignore-time -vvv --exclude ".srvprivate" "/mnt/linux_iSCSI/FOG/W10_Remote-20190523" "/mnt/FOG/W10_Remote-20190523"; exit' -u fogproject,[Protected] 10.69.0.11
[06-18-19 9:09:06 pm]  | Started sync for Image W10_Remote-20190523-HS-745-755 - Resource id #93271
[06-18-19 9:09:06 pm]  * Found Image to transfer to 1 node
Any idea what is going wrong?
-
Forget it, it was a permissions issue on the branch server.
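In case it helps anyone else hitting the same symptom: the usual fix is to give the FOG FTP user ownership of the image store on the branch server. A rough sketch, assuming the user is fogproject and the image path is /mnt/FOG as above:
# on the branch server: let the fogproject FTP user write to the image store
sudo chown -R fogproject /mnt/FOG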
-
I read that setting a node as master with empty storage could wipe all nodes in this storage group. Does this mean that a new capture should only be done on the master storage node, otherwise it will be erased on the node where it was created?
-
@processor Warning: I’m not 100% sure on this answer, but the master node will only act upon images it knows about. So on your remote FOG server (configured as a storage node) you should be able to create additional images without the master node knowing about them or erasing them. The FOG replicator only reacts to the image definitions stored in the FOG database on the master node (as long as you don’t capture images on the remote FOG server with the same names as images on the master FOG server).
-
I’ve thought about this a little; for testing purposes we can do this.
- On the HQ FOG server, using a Linux command prompt, key in:
sudo mkdir /images/tom
sudo touch /images/tom/sample.txt
- On the remote FOG server, using a Linux command prompt, key in:
sudo mkdir /images/sam
sudo touch /images/sam/sample.txt
- On the HQ FOG server, create an image named test using the web UI.
- Back on the HQ FOG server’s Linux console, key in:
sudo touch /images/test/sample.txt
- Now let the replicator run.
A successful test will leave the HQ FOG server’s /images directory with:
/images/tom
/images/test
On the remote FOG server, in the /images directory, you should have:
/images/sam
/images/test
That will tell us:
- The replicator only acts upon files for which it has an image definition in the database.
- It will not step on images you captured on your remote server that are not in the HQ FOG server’s database.
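To verify, a plain directory listing on each server is enough (a hypothetical check):
# run on both the HQ and the remote FOG server once the replicator has cycled
ls -l /images
# expected: HQ shows tom and test; the remote shows sam and test, but no tom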
-
I have only one captured image on the branch, which we are using for tests; we will see whether it gets deleted or not.
Do you know if it’s possible to schedule replications at specific times? We have about 2.5 TB to sync and I would like to run it during night shifts.
-
@processor said in FOG: Main Sites and Branches Organisation:
Do you know if it’s possible to schedule replications at specific times?
Yes, but it’s not native to FOG. In short, you will use cron to stop and start the FOG replicator service when you want replication to run. I think I have a tutorial out there on how to do that. Let me check.
-
Well, I was close; that tutorial was for setting up dynamic transfer rates. But the concept is almost the same. In your case you will want to issue the commands to stop and start the FOG image replicator.
https://forums.fogproject.org/topic/9449/dynamic-fog-replicator-transfer-rates
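A minimal crontab sketch of that idea (a hedged example: it assumes a systemd-based install where the replicator service is named FOGImageReplicator, which can differ by FOG version and init system):
# edit root's crontab with: sudo crontab -e
# start image replication at 22:00 and stop it again at 06:00
0 22 * * * /bin/systemctl start FOGImageReplicator
0 6 * * * /bin/systemctl stop FOGImageReplicator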
-
Thanks, I’ll take a look at it this weekend.