FogReplicator and Storage Nodes.
-
@sbenson said in FogReplicator and Storage Nodes.:
Then possibly the only thing we would have to do after a sync is export the images (txt file) on SRO, and import them on the MHB server
Yes, that's correct: export from SRO and import into MHB.
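After the import, it's worth confirming the definition landed with the right `imagePath`, since that is the directory the serving node deploys from. A hedged sketch: the query in the comment assumes the standard `fog` database name, and the file below is a simulated stand-in for its output (row values taken from the dumps later in this thread), not a live query.

```shell
# Simulated output of: mysql fog -e "SELECT imageName, imagePath FROM images"
# (row values come from the image dumps later in this thread).
cat > /tmp/mhb-images.txt <<'EOF'
imageName imagePath
W7P-HP6300 W7PHP6300
EOF

# The deployed files must exist under /images/<imagePath> on the serving
# node, so this is the path to check after replication.
expected_dir=$(awk 'NR == 2 { print "/images/" $2 }' /tmp/mhb-images.txt)
echo "$expected_dir"
```

On the real servers you would run the `SELECT` directly and then `ls` the resulting directory on the storage node.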
The developers are working on a way for this setup to update automatically in a future release of FOG, maybe 1.4.3 or 1.4.4, but I can't say for sure. They are creating a new API call that will make this process a bit easier.
-
@george1421 said in FogReplicator and Storage Nodes.:
In this case the SRO server should ONLY have the MHB fog server, with the SRO fog server set up as master and all others as non-master. This tells the master FOG server to replicate to everyone else that is not a master.
On the MHB fog server I think you have one storage node. If that is the case, make the MHB fog server the master node and your MHB storage node a non-master.
This setup will then only sync SRO FOG server -> MHB FOG server and MHB FOG server -> MHB storage node. That way there is only one replication across the WAN.
-
Sorry for the delay in getting back to you, been busy with other projects. So fog servers pull from the master node, rather than getting pushes from the master node?
So on SRO I create an SRO node where it is the master, with its own IP, and set a user/pass for it.
On MHB I set MHB as a master node (storing images in /images?) and SRO as a client node (also storing images in /images?)
Edit: wait, that is totally backwards from what you just said, let me re-work this.
-
@sbenson Master nodes push. That’s it.
-
SRO/MASTER
ngmID: 4
ngmMemberName: SRO
ngmMemberDescription: Local fog storage in SRO
ngmIsMasterNode: 1
ngmGroupID: 1
ngmRootPath: /images
ngmSSLPath: /opt/fog/snapins/ssl
ngmFTPPath: /images
ngmMaxBitrate:
ngmSnapinPath: /opt/fog/snapins
ngmIsEnabled: 1
ngmHostname: x.x.76.44
ngmMaxClients: 10
ngmBandwidthLimit: 2000
ngmUser: sync
ngmPass: Some PasswordA
ngmKey:
ngmInterface: eth0
ngmGraphEnabled: 1
ngmWebroot: /fog
MHB slave
ngmID: 4
ngmMemberName: MHB
ngmMemberDescription:
ngmIsMasterNode: 1
ngmGroupID: 1
ngmRootPath: /images
ngmSSLPath: /opt/fog/snapins/ssl
ngmFTPPath: /images
ngmMaxBitrate:
ngmSnapinPath: /opt/fog/snapins
ngmIsEnabled: 1
ngmHostname: x.x.57.42
ngmMaxClients: 1
ngmBandwidthLimit: 1
ngmUser: NONVALIDUSER
ngmPass: Gibberish password
ngmKey:
ngmInterface: eth0
ngmGraphEnabled: 1
ngmWebroot: /fog
*************************** 2. row ***************************
ngmID: 5
ngmMemberName: SRO
ngmMemberDescription:
ngmIsMasterNode:
ngmGroupID: 1
ngmRootPath: /images
ngmSSLPath: /opt/fog/snapins/ssl
ngmFTPPath: /images
ngmMaxBitrate:
ngmSnapinPath: /opt/fog/snapins
ngmIsEnabled: 1
ngmHostname: x.x.76.44
ngmMaxClients: 10
ngmBandwidthLimit: 20000
ngmUser: sync
ngmPass: Some PasswordA
ngmKey:
ngmInterface: eth0
ngmGraphEnabled: 1
ngmWebroot: /fog
Does this look correct?
-
@sbenson Well I don’t know what the user/password pair is, but they should be the Linux user with FTP permissions.
-
@sbenson No, if I'm understanding what I see, that is not correct.
On your SRO (master node).
- You should have 1 storage group.
- That storage group should contain 2 storage nodes.
- The first storage node should of course be the SRO node defined as the master.
- The second storage node should be the MHB fog server. The user ID and password for the MHB server can be found on the MHB fog server in /opt/fog/.fogsettings. These credentials are required because the master node transfers the images to the defined storage nodes over FTP; this account is used to log into the remote fog server to start the FTP transfer.
That is all you need for SRO. Once you do that, after a short delay SRO should start replicating the images to MHB (once the fog replicator service has been enabled).
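The credentials mentioned above come from `/opt/fog/.fogsettings` on the *remote* server. A minimal sketch of pulling them out, using a sample file created here for illustration (the `username=`/`password=` line format matches what is quoted later in this thread; on a real server you would read the existing file instead):

```shell
# Sample .fogsettings fragment; a real one lives at /opt/fog/.fogsettings
# on the storage-node server and holds many more settings.
cat > /tmp/fogsettings.sample <<'EOF'
username='fog'
password='passwordA'
EOF

# Extract the two values; these go into the storage-node entry on the master.
ftp_user=$(grep "^username=" /tmp/fogsettings.sample | cut -d"'" -f2)
ftp_pass=$(grep "^password=" /tmp/fogsettings.sample | cut -d"'" -f2)
echo "user=$ftp_user pass=$ftp_pass"
```

You can sanity-check the pair by FTPing to the storage node with those credentials before saving the node definition.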
Now on MHB.
- You should have one storage group
- You should have 2 storage nodes defined in that storage group.
- One storage node should be the MHB server configured as a master server.
- The second storage node should be the FOG storage node server in MHB.
- As soon as your images from SRO exist on MHB and you import the image definitions from SRO, the MHB Fog server will start replicating the images to the storage node in MHB.
It sounds complicated, but it really isn't. In each storage group you can only have one master node. Images are only replicated from the master node to all storage nodes in the storage group. Your master node can be a member of one or more storage groups.
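To watch that replication actually happen, the master node's replicator log is the place to look. A hedged sketch: the log path `/opt/fog/log/fogreplicator.log` is typical of stock installs but the exact message wording varies by FOG version, so the excerpt below is simulated, not captured output.

```shell
# Simulated excerpt of a replicator log; on a stock install the real file
# is typically /opt/fog/log/fogreplicator.log, and the message wording
# here is illustrative only.
cat > /tmp/fogreplicator.log <<'EOF'
[05-12-17 1:28:34 pm] * Found Image to transfer to 1 node
[05-12-17 1:28:34 pm] | Image Name: W7P-HP6300
EOF

# Confirm the image in question shows up in a replication cycle.
matches=$(grep -c "W7P-HP6300" /tmp/fogreplicator.log)
echo "replication log mentions image $matches time(s)"
```

If an image never appears in a cycle, check that it is enabled for replication and that the FTP credentials for the target node are valid.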
-
@Tom-Elliott @george1421
I have recreated the nodes, and had someone at the MHB office try to deploy an image (W7P-HP6300). The image was originally created on the SRO server, but synced to the MHB server months ago. I had the machine register with fog on the MHB (10.63.57.42) server. (On a side note, the MHB office has two network ranges: 10.63.57.x and 10.63.65.x.) After registering the machine, we went into the MHB fog web GUI, set the image to W7P-HP6300, then Tasks, basic, deploy. We then rebooted the machine and selected PXE boot. This started pulling down the image, but a lot slower than we would expect for a gigabit network. The machine was pulling the image from SRO over our 100 Mb MPLS network. We have confirmed that both DHCP pools point to the 10.63.57.42 server.
I dumped the fog database on MHB and the only time 10.63.76.44 is mentioned is in the node definition previously shown.
How does a machine being imaged, boot from the fog server in the same office, register to that server, have an image pushed to it from that server, yet the traffic comes from our main server in SRO?
W7P-HP6300 on MHB
imageID: 10
imageName: W7P-HP6300
imageDesc: Conference Room PC HP ProDesk 6300 Office Sysprep
imagePath: W7PHP6300
imageProtect: 0
imageMagnetUri:
imageDateTime: 2017-05-12 13:28:34
imageCreateBy: sbenson
imageBuilding: 0
imageSize: 104853504.000000:27798433792.000000:
imageTypeID: 1
imagePartitionTypeID: 1
imageOSID: 5
imageFormat:
imageLastDeploy: 0000-00-00 00:00:00
imageCompress: 1
imageEnabled: 1
imageReplicate: 1
W7P-HP6300 on SRO
imageID: 5
imageName: W7P-HP6300
imageDesc: Conference Room PC HP ProDesk 6300 Office Sysprep
imagePath: W7PHP6300
imageProtect: 0
imageMagnetUri:
imageDateTime: 2016-10-24 21:41:02
imageCreateBy: sbenson
imageBuilding: 0
imageSize: 104853504.000000:6786727.000000:27798433792.000000:
imageTypeID: 1
imagePartitionTypeID: 1
imageOSID: 5
imageFormat:
imageLastDeploy: 2016-10-24 21:57:14
imageCompress: 1
imageEnabled: 1
imageReplicate: 1
[13:53:20] root@MHB-FOG-01[0]:/opt$ cd fog/
[13:53:21] root@MHB-FOG-01[0]:/opt/fog$ grep 10.63.76.44 -r *
[13:53:31] root@MHB-FOG-01[0]:/opt/fog$ cd /var/www/html/fog/
[13:53:44] root@MHB-FOG-01[0]:/var/www/html/fog$ grep 10.63.76.44 -r *
[13:54:49] root@MHB-FOG-01[0]:/images$ cd W7PHP6300/
[13:55:00] root@MHB-FOG-01[0]:/images/W7PHP6300$ ls -lsa
total 13031508
       4 drwxrwxrwx  2 fog root        4096 Oct 24  2016 .
       4 drwxrwxr-x 14 fog root        4096 May  5 17:08 ..
       4 -rwxrwxrwx  1 fog root           3 Oct 24  2016 d1.fixed_size_partitions
    1024 -rwxrwxrwx  1 fog root     1048576 Oct 24  2016 d1.mbr
       4 -rwxrwxrwx  1 fog root         190 Oct 24  2016 d1.minimum.partitions
       4 -rwxrwxrwx  1 fog root          15 Oct 24  2016 d1.original.fstypes
       0 -rwxrwxrwx  1 fog root           0 Oct 24  2016 d1.original.swapuuids
    9172 -rwxrwxrwx  1 fog root     9390413 Oct 24  2016 d1p1.img
13021288 -rwxrwxrwx  1 fog root 13333793671 Oct 24  2016 d1p2.img
       4 -rwxrwxrwx  1 fog root         190 Oct 24  2016 d1.partitions
[13:55:04] root@MHB-FOG-01[0]:/images/W7PHP6300$ grep 10.63.76.44 d1\.*
-
@sbenson said in FogReplicator and Storage Nodes.:
I dumped the fog database on MHB and the only time 10.63.76.44 is mentioned is in the node definition previously shown
Just to ensure we are on the same page here: the MHB FOG server should not have ANY reference to the SRO fog server / site or anything. SRO should not be defined in any storage group or storage node on MHB. If that is the case, there is nothing to tell any client PXE booting at MHB that the SRO fog server exists.
If you have confirmed this, then we must dig into the MHB fog server configuration because this is an oddity that should never exist.
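One place worth digging besides the database: PXE clients chain-load the iPXE script that the boot server hands them, and on a FOG server that script (typically `/tftpboot/default.ipxe`) embeds one FOG server address. A hedged sketch of the check, using a simulated stand-in file since the real script's contents vary by install:

```shell
# Simulated stand-in for /tftpboot/default.ipxe; on the real MHB server
# inspect the actual file. The chain URL format is illustrative.
cat > /tmp/default.ipxe <<'EOF'
#!ipxe
chain http://10.63.57.42/fog/service/ipxe/boot.php##params
EOF

# If the SRO address ever appears here, MHB clients would be sent across the WAN.
if grep -q "10.63.76.44" /tmp/default.ipxe; then
  result="SRO address found in default.ipxe"
else
  result="no SRO references found"
fi
echo "$result"
```

Also double-check DHCP options 66/67 on both MHB scopes, since those decide which server's script the client fetches in the first place.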
-
Quick design as I see it in my head:
SRO is “main” server.
MHB is the “slave” server.
SRO should have a storage group containing information about both the main (SRO) and slave (MHB) nodes.
The “Master” node in the “Main” server will be the SRO Node. MHB will just be a part of the group.
All images associated on SRO only need to be in the SRO storage group (this will cause the SRO server to replicate its images to MHB). All that's needed here is the IP of MHB and the FOG Linux user and password.
The MHB Server will have its own group. It also has its own database.
The MHB Server will also contain its own Node as master. There should not be any need for any other storage nodes on the MHB Server.
The SRO Server should have its image definitions exported, which can then be uploaded to the MHB Server. No need for a database dump, as the only group either server will ever have should be ID 1 (if nothing has been changed, of course).
This should be all that’s required. When the client machines attempt booting from the MHB network, they should ONLY request the information from the MHB Server and node.
-
@Tom-Elliott how does a “storage group” contain information about the nodes? It seems as if each node has to be told it is part of a specific storage group (so it's a little backwards). Also, will it cause a problem if the storage groups on both servers are named “Default”, or will it require a separate group name on each server?
-
@sbenson Because you have two separate servers, the only reason MHB exists on the SRO Server is so it can replicate the SRO Node’s images down to the MHB Node.
There is ONLY 1 storage group needed for each “server”. It doesn’t matter what you name the groups, the servers are independent of each other.
The only reason you have the MHB on the SRO Server is so you can manage your images as far as I understand what your setup is.
-
@sbenson For the SRO group's MHB node, set the max clients to 0 so images won't try to come from the MHB server when imaging within the SRO side of things, btw.
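To confirm that change took, the same `nfsGroupMembers` query used elsewhere in this thread works. A sketch against a simulated result set (the real query would be `mysql fog -e "SELECT ngmMemberName, ngmMaxClients FROM nfsGroupMembers"`, assuming the standard `fog` database name):

```shell
# Simulated query output on the SRO server after dropping the MHB node's
# max clients to 0 (table and column names as used in this thread).
cat > /tmp/nodes.txt <<'EOF'
ngmMemberName ngmMaxClients
SRO-Master 10
MHB-Slave 0
EOF

# Any non-master node on SRO with max clients > 0 could still serve images
# to SRO clients across the WAN.
serving=$(awk 'NR > 1 && $2 > 0 { print $1 }' /tmp/nodes.txt)
echo "nodes still serving clients: $serving"
```

With the MHB node at 0, only SRO-Master should appear in that list.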
-
I would expect your setup to look something like this
For SRO
For MHB
This configuration will allow images to flow SRO -> MHB -> MHB_storage_node
-
@george1421 In the past I was under the impression that you were saying two nodes on each host, one master and one slave each. So SRO-Master@SRO -> SRO-Slave@MHB, and MHB-Master@MHB -> MHB-Slave@SRO, kind of providing round-robin syncing. This has been changed to match your setup. I do have a question about the passwords in this setup.
MHB-Master@MHB will have the PW from /opt/fog/.fogsettings:
username='fog'
password='passwordA'
MHB-Storage_Node@MHB will have the PW from SRO:/opt/fog/.fogsettings.
SRO-Master@SRO will have SRO's, and MHB-Slave@SRO will have MHB's?
-
On SRO
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------+--------+-------------+------+-------+
| name       | master | host        | user | pass  |
+------------+--------+-------------+------+-------+
| MHB-Slave  |        | 10.63.57.42 | fog  | BaLVo |
| SRO-Master | 1      | 10.63.76.44 | fog  | RoiYx |
+------------+--------+-------------+------+-------+
On MHB
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------------+--------+-------------+------+-------+
| name             | master | host        | user | pass  |
+------------------+--------+-------------+------+-------+
| MHB-Master       | 1      | 10.63.57.42 | fog  | BaLVo |
| MHB-Storage_Node |        | 10.63.76.44 | fog  | RoiYx |
+------------------+--------+-------------+------+-------+
-
@sbenson While this is a bit of a different way to look at it, that now looks correct.
You only need to manually set the user ID and password on MHB-Slave because SRO-Master and MHB-Master have their own databases. In the case of MHB-Master and MHB-Storage_node, they share the same database (on MHB-Master). So as soon as you add the storage node in the FOG setup, MHB-Master automatically knows about MHB-Storage_node.
With this setup if your target computer pulls any images from SRO-Master then we are going to have to dig deep into your MHB server setup. Because there is NO way for a target computer at MHB to even know that SRO exists.
-
@sbenson I don't want to confuse the subject because we are talking about SRO and MHB cross-pollination. But at your MHB site, you will need to load the location plugin. This will allow you to direct your clients (at MHB) to either MHB-Master or MHB-Storage_Node, whichever is appropriate based on the subnet.
BUT… I don’t want to go that far until we ensure that MHB clients are only imaging from MHB servers.
-
@george1421 But the user ID and password are required... or at least they have an asterisk next to them making it seem like they are. I did notice a problem: the interfaces on these boxes aren't eth0, they are ens160 on both. I can't imagine that would cause the traffic to magically be routed to the SRO machine.
-
@sbenson
WAIT, I just saw a flaw in your design!!!
For MHB-Storage_node: it has the same IP address as your SRO-Master. Is this by design? If so, it's wrong. You will create a replication loop and confuse the target computers.
This right here might cause the clients at MHB to talk to the SRO server, because it tells the clients at MHB there is a second server in MHB which is actually at SRO.
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------------+--------+-------------+------+-------+
| name             | master | host        | user | pass  |
+------------------+--------+-------------+------+-------+
| MHB-Storage_Node |        | 10.63.76.44 | fog  | RoiYx |