FogReplicator and Storage Nodes.
-
@Tom-Elliott @george1421
I have recreated the nodes and had someone at the MHB office try to deploy an image (W7P-HP6300). The image was originally created on the SRO server, but was synced to the MHB server months ago. I had the machine register with FOG on the MHB (10.63.57.42) server. (On a side note, the MHB office has 2 network ranges: 10.63.57.x and 10.63.65.x.) After registering the machine, we went into the MHB FOG web GUI, set the image to W7P-HP6300, then Tasks, Basic, Deploy. We then rebooted the machine and selected PXE boot. This started pulling down the image, but a lot slower than we would expect for a gigabit network. This machine was pulling the image from SRO over our 100 Mb MPLS network. We have confirmed that both DHCP pools point to the 10.63.57.42 server.
I dumped the fog database on MHB and the only time 10.63.76.44 is mentioned is in the node definition previously shown.
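A minimal sketch of that dump-and-grep check, run against a fabricated two-line sample dump so it is runnable anywhere; on the MHB server you would feed it the real output of `mysqldump fog` instead (the database name and the sample INSERT rows are assumptions, not actual data from this setup):

```shell
# Sketch of the check above: dump the FOG database and count the lines
# that mention the SRO server's IP. The sample dump is fabricated; on MHB
# you would use the real `mysqldump fog` output instead.
DUMP=$(mktemp)
cat > "$DUMP" <<'EOF'
INSERT INTO `nfsGroupMembers` VALUES (2,'MHB-Storage_Node','10.63.76.44','fog');
INSERT INTO `hosts` VALUES (1,'conference-pc','10.63.57.101');
EOF
# Escape the dots so they match literally, not as regex wildcards.
grep -c '10\.63\.76\.44' "$DUMP"   # prints 1: only the node row matches
```

If the count comes back higher than the number of expected node rows, something else on MHB still points at SRO.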
How does a machine being imaged, boot from the fog server in the same office, register to that server, have an image pushed to it from that server, yet the traffic comes from our main server in SRO?
W7P-HP6300 on MHB
imageID: 10
imageName: W7P-HP6300
imageDesc: Conference Room PC HP ProDesk 6300 Office Sysprep
imagePath: W7PHP6300
imageProtect: 0
imageMagnetUri:
imageDateTime: 2017-05-12 13:28:34
imageCreateBy: sbenson
imageBuilding: 0
imageSize: 104853504.000000:27798433792.000000:
imageTypeID: 1
imagePartitionTypeID: 1
imageOSID: 5
imageFormat:
imageLastDeploy: 0000-00-00 00:00:00
imageCompress: 1
imageEnabled: 1
imageReplicate: 1
W7P-HP6300 on SRO
imageID: 5
imageName: W7P-HP6300
imageDesc: Conference Room PC HP ProDesk 6300 Office Sysprep
imagePath: W7PHP6300
imageProtect: 0
imageMagnetUri:
imageDateTime: 2016-10-24 21:41:02
imageCreateBy: sbenson
imageBuilding: 0
imageSize: 104853504.000000:6786727.000000:27798433792.000000:
imageTypeID: 1
imagePartitionTypeID: 1
imageOSID: 5
imageFormat:
imageLastDeploy: 2016-10-24 21:57:14
imageCompress: 1
imageEnabled: 1
imageReplicate: 1
[13:53:20] root@MHB-FOG-01[0]:/opt$ cd fog/
[13:53:21] root@MHB-FOG-01[0]:/opt/fog$ grep 10.63.76.44 -r *
[13:53:31] root@MHB-FOG-01[0]:/opt/fog$ cd /var/www/html/fog/
[13:53:44] root@MHB-FOG-01[0]:/var/www/html/fog$ grep 10.63.76.44 -r *
[13:54:49] root@MHB-FOG-01[0]:/images$ cd W7PHP6300/
[13:55:00] root@MHB-FOG-01[0]:/images/W7PHP6300$ ls -lsa
total 13031508
       4 drwxrwxrwx  2 fog root        4096 Oct 24  2016 .
       4 drwxrwxr-x 14 fog root        4096 May  5 17:08 ..
       4 -rwxrwxrwx  1 fog root           3 Oct 24  2016 d1.fixed_size_partitions
    1024 -rwxrwxrwx  1 fog root     1048576 Oct 24  2016 d1.mbr
       4 -rwxrwxrwx  1 fog root         190 Oct 24  2016 d1.minimum.partitions
       4 -rwxrwxrwx  1 fog root          15 Oct 24  2016 d1.original.fstypes
       0 -rwxrwxrwx  1 fog root           0 Oct 24  2016 d1.original.swapuuids
    9172 -rwxrwxrwx  1 fog root     9390413 Oct 24  2016 d1p1.img
13021288 -rwxrwxrwx  1 fog root 13333793671 Oct 24  2016 d1p2.img
       4 -rwxrwxrwx  1 fog root         190 Oct 24  2016 d1.partitions
[13:55:04] root@MHB-FOG-01[0]:/images/W7PHP6300$ grep 10.63.76.44 d1\.*
-
@sbenson said in FogReplicator and Storage Nodes.:
I dumped the fog database on MHB and the only time 10.63.76.44 is mentioned is in the node definition previously shown
Just to ensure we are on the same page here: the MHB FOG server should not have ANY reference to the SRO FOG server / site or anything. SRO should not be defined in any storage group or storage node on MHB. If this is the case, there is nothing to tell any client PXE booting at MHB that the SRO FOG server exists.
If you have confirmed this, then we must dig into the MHB fog server configuration because this is an oddity that should never exist.
-
Quick design as I see it in my head:
SRO is “main” server.
MHB is the “slave” server. SRO should have a storage group containing information about both the main (SRO) and slave (MHB) servers.
The “Master” node in the “Main” server will be the SRO Node. MHB will just be a part of the group.
All images associated on SRO will only need to be in the SRO storage group (this will cause the SRO server to replicate its images to MHB). All that’s needed here is the IP of MHB and the FOG linux user and password.
The MHB Server will have its own group. It also has its own database.
The MHB Server will also contain its own Node as master. There should not be any need for any other storage nodes on the MHB Server.
The SRO Server should have the image definitions exported, which can then be uploaded to the MHB Server. No need for a database dump, as the only group either server will ever have should be the one with ID 1 (assuming nothing has been changed, of course).
This should be all that’s required. When the client machines attempt booting from the MHB network, they should ONLY request the information from the MHB Server and node.
-
-
@Tom-Elliott how does a “storage group” contain information about the nodes? It seems as if each node has to be told it is part of a specific storage group (so it’s a little backward). Also, will it cause a problem if the storage groups on both servers are both the “Default” group, or will it require a separate group on each server?
-
@sbenson Because you have two separate servers, the only reason MHB exists on the SRO Server is so it can replicate the SRO Node’s images down to the MHB Node.
There is ONLY 1 storage group needed for each “server”. It doesn’t matter what you name the groups, the servers are independent of each other.
The only reason you have the MHB on the SRO Server is so you can manage your images as far as I understand what your setup is.
-
@sbenson For the SRO group’s MHB node, set the max clients to 0 so images won’t try to come from the MHB server when imaging within the SRO side of things, btw.
-
I would expect your setup to look something like this:
For SRO (screenshot)
For MHB (screenshot)
This configuration will allow images to flow SRO -> MHB -> MHB_storage_node
-
@george1421 In the past I was under the impression that you were saying two nodes on each host, one master and one slave each. So SRO-Master@SRO -> SRO-Slave@MHB, and MHB-Master@MHB -> MHB-Slave@SRO, providing a kind of round-robin sync. This has been changed to match your setup. I do have a question about the passwords in this setup:
MHB-Master@MHB will have the PW from /opt/fog/.fogsettings:
username='fog'
password='passwordA'
MHB-Storage_Node@MHB will have the PW from SRO:/opt/fog/.fogsettings.
SRO-Master@SRO will have SRO’s
and MHB-Slave@SRO will have MHB’s?
-
On SRO
mysql> select ngmMemberName as name,ngmIsMasterNode as master, ngmHostname as host,ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------+--------+-------------+------+-------+
| name       | master | host        | user | pass  |
+------------+--------+-------------+------+-------+
| MHB-Slave  |        | 10.63.57.42 | fog  | BaLVo |
| SRO-Master | 1      | 10.63.76.44 | fog  | RoiYx |
+------------+--------+-------------+------+-------+
On MHB
mysql> select ngmMemberName as name,ngmIsMasterNode as master, ngmHostname as host,ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------------+--------+-------------+------+-------+
| name             | master | host        | user | pass  |
+------------------+--------+-------------+------+-------+
| MHB-Master       | 1      | 10.63.57.42 | fog  | BaLVo |
| MHB-Storage_Node |        | 10.63.76.44 | fog  | RoiYx |
+------------------+--------+-------------+------+-------+
-
@sbenson While this is a bit of a different way to look at it, that now looks correct.
You only need to manually set the user ID and password on MHB-Slave because SRO-Master and MHB-Master have their own databases. In the case of MHB-Master and MHB-Storage_Node, they share the same database (on MHB-Master). So as soon as you add the storage node in the FOG setup, MHB-Master knows automatically about MHB-Storage_Node.
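The password to paste into the MHB-Slave definition on SRO is the fog linux user's password from MHB's /opt/fog/.fogsettings. A hedged sketch of extracting it; a throwaway sample file with a made-up password stands in for the real path so the snippet runs anywhere:

```shell
# Hypothetical sketch: extract the fog user's password from .fogsettings.
# A temp file stands in for /opt/fog/.fogsettings so this runs anywhere;
# on the real server, point FOGSETTINGS at the actual file instead.
FOGSETTINGS=$(mktemp)
cat > "$FOGSETTINGS" <<'EOF'
username='fog'
password='passwordA'
EOF
# Strip the key name and the surrounding single quotes.
sed -n "s/^password='\(.*\)'$/\1/p" "$FOGSETTINGS"   # prints passwordA
```

Whatever this prints on MHB is the value that belongs in the MHB-Slave node definition on SRO.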
With this setup if your target computer pulls any images from SRO-Master then we are going to have to dig deep into your MHB server setup. Because there is NO way for a target computer at MHB to even know that SRO exists.
-
@sbenson I don’t want to confuse the subject because we are talking about SRO and MHB cross-pollination. But at your MHB site, you will need to load the location plugin. This will allow you to direct your clients (at MHB) to either MHB-Master or MHB-Storage_Node, whichever is proper based on the subnet.
BUT… I don’t want to go that far until we ensure that MHB clients are only imaging from MHB servers.
-
@george1421 But the user ID and password are required…or at least they have an asterisk next to them, making it seem like they are. I did notice a problem: the interfaces on these boxes aren’t eth0, they are ens160 on both. I can’t imagine that would cause the traffic to magically be routed to the SRO machine.
-
@sbenson
WAIT, I just saw a flaw in your design!!!
Your MHB-Storage_Node has the same IP address as your SRO-Master. Is this by design?? If so, it’s wrong. You will create a replication loop and confuse the target computers. This right here might be what causes the clients at MHB to talk to the SRO server, because it tells the clients at MHB there is a second server in MHB which is actually at SRO.
mysql> select ngmMemberName as name,ngmIsMasterNode as master, ngmHostname as host,ngmUser as user, left(ngmPass,5) as pass from nfsGroupMembers;
+------------------+--------+-------------+------+-------+
| name             | master | host        | user | pass  |
+------------------+--------+-------------+------+-------+
| MHB-Storage_Node |        | 10.63.76.44 | fog  | RoiYx |
-
@sbenson said in FogReplicator and Storage Nodes.:
@george1421 But the user ID and password are required…or at least they have an asterisk next to them, making it seem like they are. I did notice a problem: the interfaces on these boxes aren’t eth0, they are ens160 on both. I can’t imagine that would cause the traffic to magically be routed to the SRO machine.
The network interfaces need to be correct for multicasting. Unicast images don’t use the network interface.
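Before trusting the interface field in the node definition, it is worth confirming the NIC name on the server itself. The `ip -br link` output is simulated below with fabricated sample lines so the parsing runs anywhere; on the FOG server you would pipe the real command (`ip -br link | awk '{print $1}'`) instead:

```shell
# Hedged sketch: list interface names as `ip -br link` would show them,
# confirming ens160 (not eth0) is what goes in the node definition.
# The two sample lines below are fabricated stand-ins for real output.
awk '{print $1}' <<'EOF'
lo               UNKNOWN        00:00:00:00:00:00
ens160           UP             00:50:56:aa:bb:cc
EOF
```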
-
@george1421
SRO
mysql> select ngmMemberName as name,ngmIsMasterNode as master, ngmHostname as host,ngmUser as user, left(ngmPass,5) as pass, ngmInterface as interface from nfsGroupMembers;
+------------+--------+-------------+------+-------+-----------+
| name       | master | host        | user | pass  | interface |
+------------+--------+-------------+------+-------+-----------+
| MHB-Slave  |        | 10.63.57.42 | fog  | BaLVo | ens160    |
| SRO-Master | 1      | 10.63.76.44 | fog  | RoiYx | ens160    |
+------------+--------+-------------+------+-------+-----------+
2 rows in set (0.00 sec)
MHB
mysql> select ngmMemberName as name,ngmIsMasterNode as master, ngmHostname as host,ngmUser as user, left(ngmPass,5) as pass, ngmInterface as interface from nfsGroupMembers;
+------------------+--------+-------------+------+-------+-----------+
| name             | master | host        | user | pass  | interface |
+------------------+--------+-------------+------+-------+-----------+
| MHB-Master       | 1      | 10.63.57.42 | fog  | BaLVo | ens160    |
| MHB-Storage_Node |        | 10.63.57.42 | fog  | BaLVo | ens160    |
+------------------+--------+-------------+------+-------+-----------+
2 rows in set (0.00 sec)
-
@sbenson Unless you need it, the MHB Server does not need a second node. There is literally no point for it.
-
@sbenson OK, we need to get something cleared up. Tom and I have been chatting and we need to understand: at site MHB, how many physical FOG servers (master or slave nodes) are installed? I think we’ve been adding complexity because I misunderstood something.
-
@Tom-Elliott said in FogReplicator and Storage Nodes.:
Unless you need it, the MHB Server does not need a second node. There is literally no point for it.
It’s only there because George said so
@george1421 said in FogReplicator and Storage Nodes.:
OK we need to get something cleared up. Tom and I have been chatting and we need to understand. At site MHB how many fog servers are installed master or slave nodes. I think we’ve been adding complexity because I misunderstood something
There are a TOTAL of 2 servers in the whole company, SRO-FOG-01 and MHB-FOG-01. Both of these machines are installed on our VMware infrastructure (no, not on the same ESXi hosts).
-
@sbenson said in FogReplicator and Storage Nodes.:
There are a TOTAL of 2 servers in the whole company, SRO-FOG-01 and MHB-FOG-01
Well for that I’m sorry. Somewhere along the way I thought you said you had two physical fog servers at MHB, because you had two subnets there. I didn’t question it.
Delete the slave node on the MHB fog server and then things will straighten out.
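Once the slave node is deleted in Storage Management on MHB, nothing in its node table should reference 10.63.76.44 any more. A hedged sketch of that sanity check, with a sample line standing in for the real query output so it runs anywhere; on the server you would check `SELECT ngmHostname FROM nfsGroupMembers` against the live fog database instead:

```shell
# Hypothetical post-fix check: after removing the slave node, MHB's node
# table should only contain MHB-Master. The sample line below stands in
# for real query output against the live database.
nodes='MHB-Master 10.63.57.42'
if echo "$nodes" | grep -q '10\.63\.76\.44'; then
  echo 'stale SRO reference still present'
else
  echo 'clean'   # prints clean for this sample
fi
```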