One StorageNode for multiple Masters



  • Hi,

    the question title says it all - we want to use a single storage node for multiple masters.
    I did not find any specific information regarding such a setup, just some indication that it may work [0].
    Some masters would just do deploys, while a specific one would do captures also.

    Some background:
The FOG master instance is installed on Ubuntu Server in a VirtualBox VM (on a Windows 7 host). We tried to simply mount a host folder via vboxsf, which did not work, presumably due to FOG’s NFS usage (we’re not sure about this one).

    Might it be possible that we somehow make FOG use FTP/HTTP/whatever instead of NFS during image deploy/capture?

I hope this does not sound too basic/uninformed; I’ve read some bits here and there, and am generally unsure how to proceed.

    Regards,
    Stefan Hanke

    [0] https://wiki.fogproject.org/wiki/index.php?title=Multiple_TFTP_servers
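For context on the NFS question: during capture and deploy, the FOS client mounts NFS shares exported by the server, which is why a vboxsf folder can’t simply stand in for them. On a typical FOG install the exports look roughly like the following (paths and options shown are representative of a stock install, not authoritative):

```shell
# Illustrative /etc/exports from a typical FOG server.
# /images is the read-only deploy share; /images/dev is the
# read-write share FOS writes captures into before FTP moves
# them into place.
/images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
/images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
```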



@stefan-hanke There are some classes in France, plus a book written in French. Not helpful for most of us. There currently isn’t any ‘fogproject blessed’ training.

    As far as documentation goes for distributed setups, we have these two articles:
    https://wiki.fogproject.org/wiki/index.php?title=Replication
    https://wiki.fogproject.org/wiki/index.php?title=Location_Plugin

I’ve been wanting to create some more videos on distributed FOG setups; I tried to a few months ago but ran into some problems.



  • @wayne-workman said in One StorageNode for multiple Masters:

    Also - the way you think you want to setup FOG I believe is due to your lack of knowledge of how FOG is supposed to work in a distributed setup.

    Yeah, so true :-( Currently, we’ve backed off of a distributed setup.

Do you know of any FOG training? I found Dawan in the wiki, but it looks like it’s French-only and thus not an option (well, at least for me). It’d be helpful to know in case we decide in the future that we actually want a distributed setup.



  • George is right. Rather than trying to do something very uncommon with FOG, and trying to finagle it to work - you should setup FOG in the way it’s already designed to be setup & proven to work. Doing something very uncommon means the community is less able to help you - and less able to help the guy that comes in behind you in the future. Also - the way you think you want to setup FOG I believe is due to your lack of knowledge of how FOG is supposed to work in a distributed setup.


  • Moderator

    @stefan-hanke I still don’t understand your logic here mixed with how I know FOG works.

    Each FOG Master node will have its own database. It will not know about other master nodes or images stored on the shared storage node (storage node). The FOG server (normal mode) is a supervisory computer that is responsible for managing the deployment process. It is responsible for creating the iPXE boot menu and sending the FOS system (FOG’s customized linux OS to the target computer that captures and deploys images). The FOS engine does all of the work of imaging. The FOG computer on “watches” what happens. If you have a central storage node that would be accessible to all areas of your network and not have a single FOG Master node available is a bit confusing.

Now with that said, it is possible to do what you want. You just have to keep in mind that the target computer must be able to reach at least one FOG Master node and the common storage node, or everything will fall down.

How you would set this up:

    1. Configure one FOG Master node, then set up your central storage node. The storage node can be a FOG server in storage node mode, a standard Linux server, or a NAS such as a Synology or QNAP.
    2. On the first FOG Master node, create a storage group containing both the FOG Master node and the storage node.
    3. Swap the roles so that the storage node is the master of the group and the FOG server is an ordinary storage node, then set the max clients on the FOG server to zero. This tells FOS to capture and deploy using the storage node only.
    4. Confirm that this setup works the way you need.
    5. Add your next FOG Master node, pointing at the same storage node as before. Again, demote the FOG Master node to a storage node in that server’s storage group and promote the storage node to the master role.

    Hopefully you will see the pattern: each new FOG server is added as a storage node, and the central storage node is the master everywhere. Since the central storage node is not a full FOG server, no replication can happen: FOG replication is a push, and there is no service on the storage node to push the images.
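The max-clients trick in step 3 can be illustrated with a small, purely hypothetical sketch. This is not FOG’s actual selection code, just shell mimicking the rule that a node whose max clients is zero never serves imaging traffic, so all traffic lands on the storage node:

```shell
#!/bin/sh
# Hypothetical illustration (not FOG source): pick the first node in a
# storage group whose max-clients value is greater than zero. Setting the
# FOG server's max clients to 0 therefore forces imaging onto the storage
# node. Node names below are made up.
pick_node() {
  # args: one "name:maxClients" entry per node; prints the chosen name
  for entry in "$@"; do
    name=${entry%%:*}
    max=${entry##*:}
    [ "$max" -gt 0 ] && { echo "$name"; return 0; }
  done
  echo "none"
  return 1
}

pick_node "fog-master:0" "central-storage:10"   # prints: central-storage
```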

Understand that this is not a supported configuration for FOG, but it should work. I have not set up this type of environment, so I can only guess that it will.



  • George, Wayne, these are helpful comments. Thanks!

Actually I’m kind of surprised that you think this setup is special at all. You have one storage node that multiple masters can use for data storage purposes. We do not want to replicate the data at all; we just want one storage location from which each master can deploy images to its clients. I do hope we don’t talk past one another! That is why we thought we could simply attach a network filesystem to the FOG masters and be done; unfortunately that does not work due to FOG’s NFS usage.

For the use case: we want to capture images from nodes in one part of the network and deploy them to nodes in another part of the network. The nodes in the network partitions are sufficiently similar (hopefully). We cannot use one FOG master, since parts of the network might be inaccessible; however, there will be a machine that can access both the relevant network partition and the storage node, and this machine would then host another FOG master…

    I think I’m going to spend time experimenting with some setups just to get used to the terminology and how things work out.

    Thanks again,
    Stefan Hanke


  • Moderator

I think I might take a different approach here. Since the FOG server is running in VirtualBox and you don’t have a lot of storage space, I think what you want is to utilize a storage node to house your data. This can be done without a lot of messing around. In this setup the FOG server is used for supervisory purposes, with the storage node (maybe a NAS device) doing all of the heavy lifting. But as Wayne said, we need to understand the logic behind your request.



  • @stefan-hanke said in One StorageNode for multiple Masters:

    we want to use a single storage node for multiple masters.

    You can do this. A storage node can be a member of multiple storage groups. There’s a lot that can go wrong here, but a lot that can go right also. I don’t know where your knowledge level is with this so here are some resources that I would like for you to read through:
    https://wiki.fogproject.org/wiki/index.php?title=Replication
    https://wiki.fogproject.org/wiki/index.php?title=Location_Plugin

You would create a storage group for each master node, because you can only have one master node per storage group. Then simply join the shared node to each of those groups as a non-master and it should work fine. In practice that means multiple storage node definitions for the same physical storage node: each with a different storage node name and different storage group settings, but with everything else identical.
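To make the “multiple definitions, one box” idea concrete, the resulting definitions might look something like this (group names, node names, and the address are entirely made up for illustration):

```
# Hypothetical storage node definitions across two masters' groups.
# The same physical box appears once per group:
#
#   Group "SiteA" : node "shared-node-A"  ip=10.0.0.50  path=/images  master=no
#   Group "SiteB" : node "shared-node-B"  ip=10.0.0.50  path=/images  master=no
#
# Everything except the node name and group membership is identical.
```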

I must ask, though: what’s the use case? If you explain why you’re trying to do this, we might come up with a better solution to suggest, because in all of my FOG experience I can’t imagine why you would want to do this.

