Multicasting - Not The Master Node
-
@Joe-Gill said in Multicasting - Not The Master Node:
For right now my big issue is that all of my images sit on the storage node Images directory and none reside on my server.
I’m not sure how that’s possible. Images are replicated from Master -> Storage node. There should be no way to have your images on the storage node without being also on the master.
One thing that I may do is migrate the storage node to a different physical location in order to increase performance, but that won’t happen this summer.
This would be a valid use case to move imaging away from or across a WAN connection.
On a side note… I will be building a Debian server to house my FOG server in mid June. So I can clean all of this up then. I would love to know how to improve things as this is all still new to me and I am still learning.
When you get to this point and have it up and running. Let us know we can/will help you knit your images back together on one server.
If you are doing multiple unicast images simultaneously then setting up a 2 or 4 port LAG between your core switch and FOG will help some. If you are going to use multicasting then a single NIC will be OK. I haven’t used a LAG group in a multicasting setup as of now, so I can’t say how well it will work. It should, but I’d like to see it first.
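On the FOG side, a LAG would look something like the following on a Debian-based server. This is only a sketch: the interface names and addresses are placeholders, the ifenslave package is assumed to be installed, and the switch side needs a matching LACP port-channel configured.

```
# /etc/network/interfaces sketch: 2-port 802.3ad (LACP) bond
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    gateway 10.0.0.1
    bond-slaves eth0 eth1     # the two physical NICs in the LAG
    bond-mode 802.3ad         # LACP; must match the switch port-channel
    bond-miimon 100           # link-monitoring interval in ms
    bond-lacp-rate 1          # fast LACPDU rate
```

Keep in mind a LAG helps aggregate throughput across multiple unicast streams; a single stream still rides one link.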
-
Quite honestly, if all else fails, you could also just copy them over using your favorite cross server copy utility.
Just to clarify, which node is designated master now? It should be the one with the images.
You should have an imagereplication error log though, what does it say?
-
@Quazz said in Multicasting - Not The Master Node:
you could get performance benefits if you spread the load out.
AFAIK FOG 1.4.2 doesn’t have the ability to spread the load among all devices in a storage group. It’s not totally clear to me that the max clients count will cause FOG to roll over to the next storage node in a storage group.
For example, let’s say the master has max clients set to 5, and a storage node in the same storage group also has max clients set to 5. Now let’s say we decide to image a 6th client while the other five are deploying from the master node. I’m speculating that the 6th client will enter a hold/wait state, waiting for one of the 5 slots to become available, even if the storage node is sitting with zero clients.
We may need one of the @Developers to comment on this, but I don’t believe that FOG will load balance between the storage nodes in the same storage group. Right now IT admins will have to do that manually using the location plugin to allocate certain computers to use certain storage nodes.
-
Last summer when I was using 1.3.5 (I believe), it would roll over to the next node when you hit the node’s max client count.
@Quazz
I do have the designated master node set to the storage node with the images located on it. The replicator log on the FOG server says the same thing the multicast log said: “Not The Master Node…”
I’ll transfer them over this weekend. FTP work just as well as any?
Thanks!!
-
@Joe-Gill said in Multicasting - Not The Master Node:
I’ll transfer them over this weekend. FTP work just as well as any?
Actually, FTP is what FOG uses to replicate images between master and storage nodes, so you will be fine using FTP.
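If you do go the FTP route, a one-shot pull with lftp’s mirror mode would look roughly like this. The hostname and credentials are placeholders; on a default install, the FTP credentials FOG uses are typically recorded in /opt/fog/.fogsettings.

```shell
# Pull the whole image store from the storage node down to this server.
# STORAGE_NODE_IP, fogproject, and FTP_PASSWORD are placeholders.
lftp -u fogproject,'FTP_PASSWORD' STORAGE_NODE_IP <<'EOF'
mirror --verbose /images /images
quit
EOF
```

rsync over SSH would work just as well if both boxes have it; the main thing is preserving the directory layout under /images.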
-
@george1421
SMH… This couldn’t be without issue. So I tried the FTP transfer over the weekend and quickly discovered that my server was very low on space. This morning I added more space to the VM in the form of a new virtual disk, and I partitioned that space.
I now have sdb1 (my old /images disk on the FOG server) and sdc1 (the newly partitioned space). Both virtual disks use the same partition type (83, Linux).
Next I installed the mhddfs utility, unmounted my current mount point, removed that entry from fstab, and then mounted both disks to the /images directory.
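For anyone following along, the pooled mount described above would look roughly like this. This is a sketch: the device names match the post, and note that mhddfs is a FUSE filesystem, which some tools report sizes for differently than a plain mount.

```shell
# Pool sdb1 and sdc1 under /images with mhddfs (Debian: apt-get install mhddfs)
umount /images                      # drop the old single-disk mount
mkdir -p /mnt/store1 /mnt/store2    # one mount point per underlying disk
mount /dev/sdb1 /mnt/store1
mount /dev/sdc1 /mnt/store2
mhddfs /mnt/store1,/mnt/store2 /images -o allow_other

# fstab entries so the pool survives a reboot:
# /dev/sdb1  /mnt/store1  ext4  defaults  0 2
# /dev/sdc1  /mnt/store2  ext4  defaults  0 2
# mhddfs#/mnt/store1,/mnt/store2  /images  fuse  defaults,allow_other  0 0
```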
My size is now incorrect on the FOG node display for the server. I don’t get any size showing up there…
Am I missing something here?
The good news is that my images did start to migrate over once I freed up some space… So… That works.
-
@Joe-Gill Sorry I can’t remember this, but did I create the /dev/sdb1 disk and mount it to your fog server? (if so I created it a certain way that could be expanded)
If not, did you create it with a single partition (not LVM)? If so, you can expand the virtual disk in your hypervisor and then extend your partition and finally the filesystem, without needing to add another disk and leapfrog the disks into place.
-
@george1421 I believe I did this last Summer once already. The partition is ext4. I’ll be looking at this again in the morning. Thanks!
-
@Joe-Gill It’s more to the point how the partitions are created. But you said they were type 83 (Linux), so that tells me what I needed to know. You can expand the disk / partition / filesystem. The only sketchy part is deleting the partition and recreating it. As always, make sure you have good backups before going this route.
ref: https://thewiringcloset.wordpress.com/2013/01/09/extending-a-root-filesystem-in-linux-without-lvm/
The other way is to just create a new (larger) virtual disk, migrate all of your images to that new virtual disk, then remount the larger disk over /images, update fstab, and be done.
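The in-place grow from the linked article boils down to a short sequence. It’s destructive if you get the start sector wrong, so back up /images first; /dev/sdb1 is assumed here to be a plain ext4 partition with no LVM.

```shell
# Grow a non-LVM ext4 partition after enlarging the virtual disk
fdisk /dev/sdb        # interactively: delete partition 1, recreate it at
                      # the SAME start sector with the new end, then write
partprobe /dev/sdb    # re-read the partition table without a reboot
resize2fs /dev/sdb1   # grow the ext4 filesystem to fill the partition
```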
-
@george1421 I’m getting closer! Rebuilding my images directory now on the Server from the Node. Then I’ll make the switch to set the Server to master node.
Then I’ll try MultiCasting and update this post!
Thanks everyone for the support!
-
@george1421
The image transfer is complete! I had to touch the .mntcheck file in the /images and /images/dev directories, but other than that I’m up and rolling again! Woo Whooo!
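For anyone hitting the same wall: FOG’s NFS/FTP sanity checks look for empty .mntcheck marker files in the image store. A quick sketch, demonstrated here against a scratch directory so it is safe to run anywhere; on the real server the path would be /images.

```shell
# Recreate FOG's .mntcheck marker files. Using a scratch directory here --
# substitute /images for real use on the FOG server.
IMAGES="$(mktemp -d)"
mkdir -p "$IMAGES/dev"
touch "$IMAGES/.mntcheck" "$IMAGES/dev/.mntcheck"
# both marker files now exist under $IMAGES and $IMAGES/dev
```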
I can’t wait to start fresh with Debian and clean up my server a bit. That’ll be next month’s project!!
Thanks everyone who’s contributed!! I appreciate all the advice!
Cheers,
Joe Gill
Townsend K-12 Schools