Multicasting - Not The Master Node
-
@Tom-Elliott
What’s the easiest way to verify?
-
Well I can tell you that when I navigate to our 172.16.1.22/fog in the web browser it says this…
This is a storage node, please do not access the web ui here!
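If it helps, the same check works from a shell too; grepping the page for that banner text is an assumption on my part, but something like this should confirm which box answers as a storage node:

    curl -s http://172.16.1.22/fog/ | grep -i "storage node"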
-
@Joe-Gill so 22 is supposed to be the master node, but the logs you’re giving us are for .17? Is 17 the main GUI or the storage node? Please understand I’m trying to help, but I’m getting totally conflicting information.
-
@Tom-Elliott
Interesting…
17 is supposed to be the master node.
22 was set to the master node somehow. 17 is the main GUI.
22 is the storage node. I think I know what happened. Initially I had a FOG server, a FOG storage node, and a FreeNAS storage server set up. FreeNAS was giving me problems so I nixed it. When that happened I reset my Storage Node on the server. I believe I misunderstood the syntax of what the Storage Node IP was in the GUI and put the wrong IP in there.
Does that help?
-
Any thoughts on this?
I’ve been stuck unicasting all day… My 10 machines still haven’t finished cloning… This lab is only on a 100 Mbps switch though. Ugh!
-
@Joe-Gill My thoughts are these:
Why do you have 2 FOG servers set up on the same subnet? There must be a technical reason for this?
What precisely do you have configured for DHCP option 66 {next-server}?
If you have DHCP option 66 pointing towards 172.16.1.17, how will the storage node 172.16.1.22 ever be used if the location plugin is not installed and the clients homed to that server?
But more to the point, why do you have 2 FOG (devices) installed on the same subnet?
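If you’re not sure what the Windows box is actually handing out, one way to check (no promises on the exact decode output) is to watch the DHCP exchange from the FOG server while a client PXE boots; the interface name here is an example:

    tcpdump -i eth0 -v port 67 or port 68

The offer the client gets back should show the boot server / next-server address, which ought to be 172.16.1.17.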
-
As for the 2 FOG devices on the same subnet… I had no idea it was an issue. We have plenty (256) of subnets available. I never had a single problem multicasting last year with this setup. Only occasionally would I have to go in and manually clear the FOG queue when something else failed (bad network cable, a task that didn’t delete properly, etc.) and I needed to reset things.
We don’t have the FOG server serving DHCP. We have a Windows server doing this (I’d rather go Linux, but the other admin prefers Windows). So as for option 66, I don’t know. The only thing we set on the DHCP server was for PXE to point to our server IP (172.16.1.17).
-
@Joe-Gill said in Multicasting - Not The Master Node:
As for the 2 FOG devices on the same subnet… I had no idea it was an issue.
It’s not an issue to have 2 FOG servers on one subnet. The question is why (technically) you feel you need two servers?
The only thing we set on the DHCP server was for PXE to point to our server IP (172.16.1.17)
So only the master FOG server is being used for PXE booting.
Unless I’m missing something, I don’t see your storage node being used at all. Since you don’t have the location plugin, I don’t see how the target computers will know about the storage node.
-
Initially when I set everything up, I was under the impression that you got better overall performance having a separate storage node. Also, I thought it was necessary in order to do multicasting. If you can do everything from 1 server, that’s great news!
What is the purpose of a FOG Storage Node? I thought it was just to store images? One thing that I may do is migrate the Storage Node to a different physical location in order to increase performance, but that won’t happen this Summer. For right now my big issue is that all of my images sit in the storage node’s /images directory and none reside on my server.
Yes only the Master Node (172.16.1.17) is being used to PXE boot. Everything works great there.
On a side note… I will be building a Debian server to house my FOG server in mid June. So I can clean all of this up then. I would love to know how to improve things as this is all still new to me and I am still learning.
-
@Joe-Gill Storage nodes are typically used for one of two things afaik: to serve different subnets/physical locations, and/or to create a separate storage group with different images attached to it.
One main FOG server can do basically everything on its own, but of course serving other subnets and what not would be trickier afaik.
-
@Quazz
All of our network is physically connected with fiber in a large star. So I am thinking one server would do just fine in our application. I guess my thought was to have redundancy in where my images were, and that was the other reason I did things that way. Also I thought it improved overall performance of the imaging process as far as unicasting was concerned. What would be the best way to move my images back to my server?
What’s the safest method to do this?
-
@Joe-Gill If you’re unicasting, then yes, you could get performance benefits if you spread the load out.
If they’re both part of the same storage group and the node with the images on is designated master node, then it should sync the images to the server. This will take some time of course. I believe it checks every 10 minutes by default if it needs to sync.
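You can watch it happen on the master; on a default install the replicator log should be around here (treat the exact path as an assumption, it can vary by version):

    tail -f /opt/fog/log/fogreplicator.log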
-
So when I was looking at this last night, all I saw listed in Storage Nodes was my FOG Storage Node where my images are currently stored. I went ahead and added a new Storage Node and called it Server. I set the IP to the server IP. The group defaulted to the same group as the other FOG Storage Node. I left that overnight and checked it this morning. I still do not see any images, though I thought they would sync up and be fine.
I’m currently checking with the other admin to get a list of how our network is setup as far as subnets go. He handles all of that stuff and I am a new admin. So I’m still learning how he set that up. I know enough to do what I do and let him handle the rest.
I’m headed out for the day now but will check on this later for suggestions. Ideally if I could move those images this weekend that would save me time next week. So any ideas here would be very helpful.
Thanks everyone for the ideas and feedback!
-
@Joe-Gill said in Multicasting - Not The Master Node:
For right now my big issue is that all of my images sit in the storage node’s /images directory and none reside on my server.
I’m not sure how that’s possible. Images are replicated from Master -> Storage node. There should be no way to have your images on the storage node without them also being on the master.
One thing that I may do is migrate the Storage Node to a different physical location in order to increase performance, but that won’t happen this Summer.
This would be a valid use case to move imaging away from or across a WAN connection.
On a side note… I will be building a Debian server to house my FOG server in mid June. So I can clean all of this up then. I would love to know how to improve things as this is all still new to me and I am still learning.
When you get to this point and have it up and running. Let us know we can/will help you knit your images back together on one server.
If you are doing multiple unicast images simultaneously, then setting up a 2 or 4 port LAG between your core switch and FOG will help some. If you are going to use multicasting then a single NIC will be OK. I haven’t used a LAG group in a multicasting setup as of now, so I can’t say how well it will work. It should, but I’d like to see it first.
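For when you build that Debian box: a minimal sketch of a 2-port 802.3ad LAG using the ifenslave package; interface names and addresses are examples, and the switch ports have to be configured for LACP as well:

    # /etc/network/interfaces (example only)
    auto bond0
    iface bond0 inet static
        address 172.16.1.17
        netmask 255.255.255.0
        gateway 172.16.1.1
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100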
-
Quite honestly, if all else fails, you could also just copy them over using your favorite cross server copy utility.
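For example, rsync over SSH run from the node that currently holds the images (IPs and paths assumed from this thread); -a preserves permissions and -P lets you resume if the transfer gets interrupted:

    rsync -avP /images/ root@172.16.1.17:/images/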
Just to clarify, which node is designated master now? It should be the one with the images.
You should have an imagereplication error log though, what does it say?
-
@Quazz said in Multicasting - Not The Master Node:
you could get performance benefits if you spread the load out.
AFAIK FOG 1.4.2 doesn’t have the ability to spread the load among all devices in a storage group. It’s not totally clear to me that the max clients count will cause FOG to roll over to the next storage node in a storage group.
For example, let’s say the master has max clients set to 5 and a storage node in the same storage group also has max clients of 5. Now let’s say we image a 6th client while the other five are deploying from the master node. I’m speculating that the 6th client will enter a hold/wait state waiting for one of the 5 slots to become available, even if the storage node is sitting with zero clients.
We may need one of the @Developers to comment on this, but I don’t believe that FOG will load balance between the storage nodes in the same storage group. Right now IT admins will have to do that manually using the location plugin to allocate certain computers to use certain storage nodes.
-
Last Summer when I was using 1.3.5 (I believe), it would roll over to the next node when you hit the node’s max client count.
@Quazz
I do have the designated master node set to the storage node with the images located on it. The Replicator log on the FOG Server says the same thing the multicast log said: “Not The Master Node…”
I’ll transfer them over this weekend. Will FTP work just as well as anything?
Thanks!!
-
@Joe-Gill said in Multicasting - Not The Master Node:
I’ll transfer them over this weekend. Will FTP work just as well as anything?
Actually, FTP is what FOG uses to replicate images between master and storage nodes. So you’ll be fine using FTP.
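If you want to script it rather than click through a client, something along these lines with lftp can mirror the whole directory in one shot. Run it from the master, pulling from the node; the fog FTP user’s password normally lives in /opt/fog/.fogsettings on the node, but treat that path (and the absolute /images paths) as assumptions:

    lftp -u fog,'PASSWORD' -e 'mirror -c /images /images; quit' 172.16.1.22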
-
@george1421
SMH… This couldn’t be without issue. So I tried the FTP transfer over the weekend and quickly discovered that my server was very low on space. This morning I added more space to the VM, in the form of a new virtual disk, and partitioned it.
I have sdb1 (my old /images directory on the FOG server) and sdc1 (the newly partitioned space). Both partitions are the same filesystem type (83, Linux).
Next I installed the mhddfs utility. I unmounted my current mount point and removed that entry from fstab. Then I mounted both disks to the /images directory.
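Roughly the sequence I followed, with device names as examples:

    umount /images                        # drop the old direct mount
    mkdir -p /mnt/sdb1 /mnt/sdc1
    mount /dev/sdb1 /mnt/sdb1
    mount /dev/sdc1 /mnt/sdc1
    mhddfs /mnt/sdb1,/mnt/sdc1 /images -o allow_other
    # and the matching fstab line:
    # mhddfs#/mnt/sdb1,/mnt/sdc1  /images  fuse  defaults,allow_other  0  0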
My size is now incorrect on the FOG node display for the server. I don’t get any size showing up there…
Am I missing something here?
The good news is that my images did start to migrate over once I freed up some space… So… That works.
-
@Joe-Gill Sorry I can’t remember this, but did I create the /dev/sdb1 disk and mount it to your FOG server? (If so, I created it a certain way so that it could be expanded.)
If not, and you created it yourself, did you create it with a single partition (not LVM)? If so you can expand the virtual disk in your hypervisor, then extend your partition, and finally the filesystem, without needing to add another disk and leapfrog the disks into place.
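A sketch of that path, assuming a single ext4 partition on /dev/sdb and that you’ve already grown the virtual disk in the hypervisor (growpart comes from the cloud-guest-utils package):

    growpart /dev/sdb 1    # grow partition 1 to fill the enlarged disk
    resize2fs /dev/sdb1    # grow the ext4 filesystem to fill the partition

resize2fs can grow ext4 online, so you shouldn’t even need to unmount /images for this.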