FOG : Main sites and Branches organisation
-
@processor The first two can be done by just setting up storage nodes at each site. The third one makes the whole thing complicated, though.
-
I don’t see a question in your post, but I can give you a few comments.
- Use the location plugin to assign storage nodes and client computers to locations. That way you can deploy to computers at the same site.
- You can only capture images to master nodes (i.e. real FOG servers, not storage nodes) in each storage group. So you will be able to capture and deploy images at HQ, but only deploy at the branches.
- The storage nodes don’t have a web ui or a local database; they are dependent on the master node in the storage group to operate. If the master node is at HQ, then you must have a continuous link between the branches and HQ to deploy images.
-
Thank you both for your answers.
Sorry if it’s a silly question, but would it be possible to configure a normal server as a storage node?
If yes, is there any advantage in doing this?
Or would having a normal server at each site, syncing them outside of FOG’s services, and exporting/importing images via the web ui be the best solution for us?
-
@processor If you have a full fog server at each location then you will have to manage each fog server independently. You can use the ldap plugin to avoid having to create local users on each fog server. To deploy images you will have to use the fog server at each location.
You can create a storage group where the FOG server at HQ is the master node and each of the fog servers at the remote locations is “listed” as a storage node in this storage group. If set up this way, FOG will replicate images from the master node to the full fog servers at each site. The only manual action will be to export the image definitions from the HQ fog server and then import them on each site’s FOG server. This is quick and easy to do via the web ui. I wish it was a bit more automatic; this configuration is officially unsupported, but it works.
Having a full fog server at each location does have its advantages too. You can then capture and deploy at each location. You can also multicast at each location since multicasting can only be done via a full/normal fog server. You will be able to deploy even if your WAN is down.
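If you go this route, you can watch replication from the HQ server by following the image replicator log. On a default install it should live under /opt/fog/log (the same directory the transfer log shown later in this thread points at), though the exact file name is an assumption to verify on your server:

```bash
# Follow the image replicator log on the HQ (master) server to see
# which images are being pushed out to the branch servers
tail -f /opt/fog/log/fogreplicator.log
```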
-
@george1421
Hi,
This seems to be the best option for our case. I did what you suggested, but something is not working as expected.
I configured a new node in the default storage group on the main server. This is how I configured it:
- ip: the IP of the branch server
- image path: /mnt/FOG, same as configured on the branch server
- ftp path: same
- interface: ens160 (the outgoing interface of the main server)
- management username: fogproject
- management password: same as configured on the branch server
All other settings left as default.
On the branch server’s default storage node, the master option is now unchecked.
I can see the new storage node on the main server dashboard with space available.
But the replication is not working as expected; this is what I can see in the logs:
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1.mbr (FR3-FOG-01)
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1.minimum.partitions (FR3-FOG-01)
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1.original.fstypes (FR3-FOG-01)
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1.original.swapuuids (FR3-FOG-01)
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1.partitions (FR3-FOG-01)
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1p1.img (FR3-FOG-01)
[06-18-19 9:09:06 pm] # W10_Remote-20190523-HS-745-755: File does not exist d1p2.img (FR3-FOG-01)
[06-18-19 9:09:06 pm] | CMD: lftp -e 'set xfer:log 1; set xfer:log-file /opt/fog/log/fogreplicator.W10_Remote-20190523-HS-745-755.transfer.FR3-FOG-01.log;set ftp:list-options -a;set net:max-retries 10;set net:timeout 30; mirror -c --parallel=20 -R --ignore-time -vvv --exclude ".srvprivate" "/mnt/linux_iSCSI/FOG/W10_Remote-20190523" "/mnt/FOG/W10_Remote-20190523"; exit' -u fogproject,[Protected] 10.69.0.11
[06-18-19 9:09:06 pm] | Started sync for Image W10_Remote-20190523-HS-745-755 - Resource id #93271
[06-18-19 9:09:06 pm] * Found Image to transfer to 1 node
Any idea of what is going wrong?
-
Forget it, it was a rights (permissions) issue on the branch server.
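For anyone hitting the same symptoms: a fix along these lines on the branch server is typically what resolves it (assuming the /mnt/FOG image path and fogproject management user from the node settings above; adjust to your own install):

```bash
# Give the management/FTP user ownership of the image path on the
# branch server so lftp can write the replicated files there
sudo chown -R fogproject:fogproject /mnt/FOG
sudo chmod -R 775 /mnt/FOG
```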
-
I read that setting a node with empty storage as master could wipe all nodes in this storage group. Does this mean that a new capture should only be done on the master storage node, otherwise it will be erased from the node where it was created?
-
@processor Warning: I’m not 100% sure on this answer, but the master node will only act upon images it knows about. So on your remote fog server (configured as a storage node) you should be able to capture additional images without the master node knowing about them or erasing them. The fog replicator only acts on the image definitions stored in the fog database on the master node (as long as you don’t have images on the remote fog server captured under the same names as on the master fog server).
-
I’ve thought about this a little; for testing purposes we can do this.
- On the HQ fog server, using a linux command prompt, key in
sudo mkdir /images/tom
sudo touch /images/tom/sample.txt
- On the remote fog server, using a linux command prompt, key in
sudo mkdir /images/sam
sudo touch /images/sam/sample.txt
- On the HQ fog server, create an image named test using the web ui.
- Back on the HQ fog server’s linux console, key in
sudo touch /images/test/sample.txt
- Now let the replicator run.
A successful test will leave the following in the /images directory on the HQ fog server:
/images/tom
/images/test
On the remote FOG server in the /images directory you should have
/images/sam
/images/test
That will tell us
- The replicator only acts upon images for which it has an image definition in the database.
- It will not step on images you captured on your remote server that are not in the HQ fog server’s database.
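A quick way to check the outcome after the replicator has run (plain directory listings, nothing FOG-specific):

```bash
# Run on each server after a replication cycle.
# HQ should still show tom and test; the remote should show sam and test.
ls /images
```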
-
I have only one capture on the branch with which we are doing tests, we will see if the image is deleted or not.
Do you know if it’s possible to schedule replications at specific times? We have about 2.5 TB to sync and I would like to run it during night shifts.
-
@processor said in FOG : Main sites and Branches organisation:
Do you know if it’s possible to schedule replications at specific times?
Yes, but it’s not native to FOG. In short, you will use cron to stop and start the fog replicator service to control when replication runs. I think I have a tutorial out there on how to do that. Let me check.
-
Well, I was close: the tutorial was for setting up dynamic transfer rates, but the concept is almost the same. In your case you will want to issue the commands to stop and start the fog image replicator.
https://forums.fogproject.org/topic/9449/dynamic-fog-replicator-transfer-rates
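As a rough sketch of the idea (assuming a systemd-based install where the image replicator service is named FOGImageReplicator; verify the actual service name on your server before relying on this), a cron file like the following would limit replication to overnight hours:

```bash
# /etc/cron.d/fog-replication  (hypothetical file name)
# Start the image replicator at 10 PM and stop it at 6 AM
0 22 * * * root systemctl start FOGImageReplicator
0 6 * * * root systemctl stop FOGImageReplicator
```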
-
Thanks, I’ll take a look at it this weekend.