FogReplicator and Storage Nodes.
-
@sbenson Because you have two separate servers, the only reason MHB exists on the SRO Server is so it can replicate the SRO Node’s images down to the MHB Node.
There is ONLY 1 storage group needed for each “server”. It doesn’t matter what you name the groups, the servers are independent of each other.
The only reason you have the MHB node on the SRO server is so you can manage your images, as far as I understand your setup.
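If you want to double-check that each server really carries only its own group, the groups live in the database too. A minimal sketch, assuming the groups table is named nfsGroups with ngID/ngName columns (this thread only confirms the nfsGroupMembers table, so treat those names as assumptions):

```sql
-- Assumed schema: nfsGroups(ngID, ngName); only nfsGroupMembers is confirmed in this thread.
-- Run on each server; each should list only its own storage group(s).
SELECT ngID, ngName FROM nfsGroups;
```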
-
@sbenson For the SRO group’s MHB node, set the max clients to 0 so images won’t try to come from the MHB server when imaging within the SRO side of things, btw.
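For reference, that max-clients value can also be checked straight from the database. A hedged sketch, assuming the column is named ngmMaxClients (the queries elsewhere in this thread confirm the other ngm* columns, but not this one); the supported place to change it is still the node’s settings in the web UI:

```sql
-- Assumption: the max-clients column is ngmMaxClients (not shown in this thread).
-- The MHB node inside the SRO group should report 0 after the change.
SELECT ngmMemberName, ngmMaxClients FROM nfsGroupMembers;
```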
-
I would expect your setup to look something like this
For SRO
For MHB
This configuration will allow images to flow SRO -> MHB -> MHB_storage_node
-
@george1421 In the past I was under the impression that you were saying 2 nodes on each host, one master and one slave each: SRO-Master@SRO -> SRO-Slave@MHB, and MHB-Master@MHB -> MHB-Slave@SRO, kind of a round-robin sync. This has been changed to match your setup. I do have a question about the passwords in this setup:
MHB-Master@MHB will have the PW from
/opt/fog/.fogsettings
username='fog'
password="passwordA"
MHB-Storage_Node@MHB will have the PW from SRO:/opt/fog/.fogsettings
SRO-Master@SRO will have SRO’s
and MHB-Slave@SRO will have MHB’s?
-
On SRO
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------+--------+-------------+------+-------+
| name       | master | host        | user | pass  |
+------------+--------+-------------+------+-------+
| MHB-Slave  |        | 10.63.57.42 | fog  | BaLVo |
| SRO-Master |      1 | 10.63.76.44 | fog  | RoiYx |
+------------+--------+-------------+------+-------+
On MHB
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------------+--------+-------------+------+-------+
| name             | master | host        | user | pass  |
+------------------+--------+-------------+------+-------+
| MHB-Master       |      1 | 10.63.57.42 | fog  | BaLVo |
| MHB-Storage_Node |        | 10.63.76.44 | fog  | RoiYx |
+------------------+--------+-------------+------+-------+
-
@sbenson While this is a bit of a different way to look at it, that now looks correct.
You only need to manually set the user ID and password on MHB-Slave because SRO-Master and MHB-Master have their own databases. In the case of MHB-Master and MHB-Storage_Node, they share the same database (on MHB-Master). So as soon as you add the storage node in the FOG setup, MHB-Master automatically knows about MHB-Storage_Node.
With this setup, if your target computer pulls any images from SRO-Master, then we are going to have to dig deep into your MHB server setup, because there is NO way for a target computer at MHB to even know that SRO exists.
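One way to verify that, using the same table the outputs above come from: on the MHB server, search its database for any member pointing at SRO’s address. An empty result means MHB’s database has no path to SRO at all:

```sql
-- Run on the MHB server. 10.63.76.44 is the SRO address from the outputs in this thread.
-- Zero rows back means no MHB client can be handed the SRO server.
SELECT ngmMemberName, ngmHostname FROM nfsGroupMembers WHERE ngmHostname = '10.63.76.44';
```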
-
@sbenson I don’t want to confuse the subject, because we are talking about SRO and MHB cross-pollination. But at your MHB site you will need to load the location plugin. This will allow you to direct your clients (at MHB) to either MHB-Master or MHB-Storage_Node, whichever is proper based on the subnet.
BUT… I don’t want to go that far until we ensure that MHB clients are only imaging from MHB servers.
-
@george1421 But the user ID and password are required… or at least they have an asterisk next to them, making it seem like they are. I did notice a problem: the interfaces on these boxes aren’t eth0, they are ens160 on both. I can’t imagine that would cause the traffic to magically be routed to the SRO machine.
-
@sbenson
WAIT, I just saw a flaw in your design!!!
Your MHB-Storage_Node has the same IP address as your SRO-Master. Is this by design?? If so, it’s wrong. You will create a replication loop and confuse the target computers. This right here might cause the clients at MHB to talk to the SRO server, because it tells the clients at MHB there is a second server in MHB, which is actually at SRO.
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------------+--------+-------------+------+-------+
| name             | master | host        | user | pass  |
+------------------+--------+-------------+------+-------+
| MHB-Storage_Node |        | 10.63.76.44 | fog  | RoiYx |
-
@sbenson said in FogReplicator and Storage Nodes.:
@george1421 But the user ID and password are required… or at least they have an asterisk next to them, making it seem like they are. I did notice a problem: the interfaces on these boxes aren’t eth0, they are ens160 on both. I can’t imagine that would cause the traffic to magically be routed to the SRO machine.
The network interfaces need to be correct for multicasting. Unicast images don’t use the network interface.
-
@george1421
SRO
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass, ngmInterface as interface from nfsGroupMembers;
+------------+--------+-------------+------+-------+-----------+
| name       | master | host        | user | pass  | interface |
+------------+--------+-------------+------+-------+-----------+
| MHB-Slave  |        | 10.63.57.42 | fog  | BaLVo | ens160    |
| SRO-Master |      1 | 10.63.76.44 | fog  | RoiYx | ens160    |
+------------+--------+-------------+------+-------+-----------+
2 rows in set (0.00 sec)
MHB
mysql> select ngmMemberName as name, ngmIsMasterNode as master, ngmHostname as host, ngmUser as user, left(ngmPass,5) as pass, ngmInterface as interface from nfsGroupMembers;
+------------------+--------+-------------+------+-------+-----------+
| name             | master | host        | user | pass  | interface |
+------------------+--------+-------------+------+-------+-----------+
| MHB-Master       |      1 | 10.63.57.42 | fog  | BaLVo | ens160    |
| MHB-Storage_Node |        | 10.63.57.42 | fog  | BaLVo | ens160    |
+------------------+--------+-------------+------+-------+-----------+
2 rows in set (0.00 sec)
-
@sbenson Unless you need it, the MHB Server does not need a second node. There is literally no point for it.
-
@sbenson OK, we need to get something cleared up. Tom and I have been chatting and we need to understand: at site MHB, how many physical FOG servers are installed, master or slave nodes? I think we’ve been adding complexity because I misunderstood something.
-
@Tom-Elliott said in FogReplicator and Storage Nodes.:
Unless you need it, the MHB Server does not need a second node. There is literally no point for it.
It’s only there because George said so
@george1421 said in FogReplicator and Storage Nodes.:
OK we need to get something cleared up. Tom and I have been chatting and we need to understand. At site MHB how many fog servers are installed master or slave nodes. I think we’ve been adding complexity because I misunderstood something
There are a TOTAL of 2 servers in the whole company, SRO-FOG-01 and MHB-FOG-01. Both of these machines are installed on our VMware infrastructure (no, not the same ESXi hosts).
-
@sbenson said in FogReplicator and Storage Nodes.:
There are a TOTAL of 2 servers in the whole company, SRO-FOG-01 and MHB-FOG-01
Well for that I’m sorry. Somewhere along the way I thought you said you had two physical fog servers at MHB, because you had two subnets there. I didn’t question it.
Delete the slave node on the MHB fog server and then things will straighten out.
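After deleting the extra node (the web UI is the supported route), the same query used earlier in this thread will confirm the result; on the MHB server only the master row should remain:

```sql
-- Run on the MHB server after removing the extra node in the web UI.
-- Expected: a single row, MHB-Master, with master = 1.
SELECT ngmMemberName AS name, ngmIsMasterNode AS master, ngmHostname AS host FROM nfsGroupMembers;
```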
-
@sbenson So:
SRO side Needs:
Master (SRO-FOG-01)
Slave (MHB-FOG-01)
MHB Side Needs:
Master (MHB-FOG-01)
-
@george1421 said in FogReplicator and Storage Nodes.:
The network interfaces need to be correct for multicasting. Unicast images don’t use the network interface.
Just trying to clarify:
FOG uses the interface for the bandwidth page. For multicast it uses an auto-detection type system now.
-
@Tom-Elliott said in FogReplicator and Storage Nodes.:
So:
SRO side Needs:
Master (SRO-FOG-01)
Slave (MHB-FOG-01)
MHB Side Needs:
Master (MHB-FOG-01)
Done, SRO has
| MHB-Slave  |        | 10.63.57.42 | fog  | BaLVo | ens160 |
| SRO-Master |      1 | 10.63.76.44 | fog  | RoiYx | ens160 |
MHB has
| MHB-Master | 1 | 10.63.57.42 | fog | BaLVo | ens160 |
-
@sbenson This should then be good to go.
I’d say, from the SRO Master server, run
systemctl restart FOGImageReplicator FOGSnapinReplicator
just to make sure things are good to go and things will start replicating (unless you need to wait until later on).
-
@Tom-Elliott That probably should be done on the MHB server too just to flush out any cached systems since we deleted a node.