Master vs Storage nodes and highly-available/cluster FOG installation



  • Hello everyone,

    Have been testing FOG and it is a fantastic project. I am now at a client that wants to use FOG, however they want a highly available installation/set-up, and so I have a few questions. But firstly, let's assume the following architecture:

    • 3 geographically different zones (ZoneA, ZoneB, ZoneC)
    • 1 FOG master node in ZoneA
    • 1 FOG storage node in ZoneB
    • 1 FOG storage node in ZoneC
    • Each zone is in its own VLAN and can have specific DHCP settings
    1. When deploying (downloading) a new image to a client how is the storage node selected? What will make a client use the master or one of the two other storage nodes?

    2. Is there any way I can define a specific storage node for a specific group of machines? Is the solution to give each storage node its own TFTP server, as per the wiki guide http://fogproject.org/wiki/index.php/Multiple_TFTP_servers ? The problem here is that the guide refers to /tftpboot/pxelinux.cfg/, which doesn't exist in version 1.2.0. Does any part of TFTP need to be shared amongst the nodes?

    3. With master and storage nodes, the only real redundancy is the image storage, meaning the web interface and MySQL database are still single points of failure. Could the MySQL database be placed on a different server, and then Cluster Suite (for example) be used to create an active/passive cluster with service failover of Apache and MySQL? Or is there any other solution or method to guarantee availability of the service?

    Guess that is all.

    Looking forward to your replies, best regards,


  • Moderator

    @Istvan Cebrian, post: 45682, member: 29376 said:

    Makes sense that the groups would have to be different due to image replication, so I guess a real active/active set-up for load balancing would not be possible, and active/passive with the database on a third server is the only way to go (HA only, no LB).

    So assuming we have set up an active/passive cluster with HTTPd service failover and one virtual IP, the question is now whether one could install the FOG server using the virtual IP provided by the cluster on one node (node1), pointing to the external DB, then do the installation again on the second node (also using the virtual IP), pointing to the same database. We could then fail over the MySQLd, HTTPd, and FOG services from one node to the other as necessary, in case one node goes down.

    What company is wanting to do this???

    This is simply out-there in FOG terms.



  • Makes sense that the groups would have to be different due to image replication, so I guess a real active/active set-up for load balancing would not be possible, and active/passive with the database on a third server is the only way to go (HA only, no LB).

    So assuming we have set up an active/passive cluster with HTTPd service failover and one virtual IP, the question is now whether one could install the FOG server using the virtual IP provided by the cluster on one node (node1), pointing to the external DB, then do the installation again on the second node (also using the virtual IP), pointing to the same database. We could then fail over the MySQLd, HTTPd, and FOG services from one node to the other as necessary, in case one node goes down.
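    An active/passive set-up like the one described is commonly built with keepalived managing the virtual IP. The fragment below is only a sketch of that idea, not a tested FOG configuration: the interface name, addresses, router ID, and health check are all placeholder assumptions.

    ```
    # /etc/keepalived/keepalived.conf on node1 (initially active).
    # All values are illustrative placeholders.
    vrrp_script chk_httpd {
        script "pidof httpd"      # node counts as healthy while Apache runs
        interval 2
    }

    vrrp_instance FOG_VIP {
        state MASTER              # node2 would use state BACKUP
        interface eth0
        virtual_router_id 51
        priority 100              # node2 gets a lower priority, e.g. 90
        virtual_ipaddress {
            192.168.1.50/24       # the VIP used during the FOG installation
        }
        track_script {
            chk_httpd
        }
    }
    ```

    The FOG installer on both nodes would then be run with 192.168.1.50 as the server address, so clients always follow the VIP to whichever node currently holds it.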


  • Moderator

    I use Hyper-V with replication.

    If our FOG goes down due to hardware/network failure on one server, it fires up immediately on another server.


  • Moderator

    @Istvan Cebrian, post: 45675, member: 29376 said:

    Well, I can go ahead and test a few scenarios. One last question:

    Could I potentially have two “Master Nodes” each pointing to the same DB? Do you see any reason as to why this would not work?

    I am thinking as a first test having:

    • 1 MySQL DB Server with the Fog DB
    • 2 Fog Master Nodes using the same MySQL DB Server

    Hi,

    Due to a high-availability strategy on our campus, we have planned a test of that installation in June!

    Regards,
    Ch3i.


  • Senior Developer

    2 FOG Master nodes is possible, but they have to be in separate groups. There’s always only one Master node per group.



  • Well, I can go ahead and test a few scenarios. One last question:

    Could I potentially have two “Master Nodes” each pointing to the same DB? Do you see any reason as to why this would not work?

    I am thinking as a first test having:

    • 1 MySQL DB Server with the Fog DB
    • 2 Fog Master Nodes using the same MySQL DB Server

  • Senior Developer

    Technically it's possible to set up a clustered/replicated MySQL server, though I don't know if the current FOG system will automatically fail over to it. I could look into it, though. Essentially, right now, if your normal SQL server were to fail, the only thing you'd have to do to switch to the new one is change the DATABASE_HOST line in /var/www/fog/lib/fog/Config.class.php to point at the secondary system. There may be a simpler way, though I've not had the time or opportunity to test.
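    As a concrete illustration of that manual switch, here is a hedged shell sketch. The define('DATABASE_HOST', ...) format and the secondary IP are assumptions based on FOG 1.2.0, and the demo edits a throwaway copy of the config rather than the live file, so it is safe to run.

    ```shell
    # Sketch of the manual DB failover step described above. The define()
    # format is an assumption based on FOG 1.2.0 -- check your own
    # Config.class.php before adapting this.
    NEW_DB_HOST=192.168.1.20          # hypothetical secondary MySQL server

    CONFIG=$(mktemp)                  # stand-in for /var/www/fog/lib/fog/Config.class.php
    printf "<?php\ndefine('DATABASE_HOST', '10.0.0.5');\n" > "$CONFIG"

    cp "$CONFIG" "$CONFIG.bak"        # keep a backup so you can fail back
    sed -i "s/\(define('DATABASE_HOST', *'\)[^']*/\1${NEW_DB_HOST}/" "$CONFIG"

    grep 'DATABASE_HOST' "$CONFIG"    # prints the updated define()
    ```

    On a real system you would point CONFIG at the live file instead, then restart httpd so FOG picks up the new host.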



  • Hello Tom,

    Thanks very much for your fast reply, and sorry for my slow one. As per your recommendation I have tested the location plugin, and it is exactly what I was looking for. Thanks!

    In regards to my 3rd question, in that MySQL and the web interface are single points of failure on the master node: is there any recommended approach by which I could guarantee some sort of redundant system? Can I, for example, place the MySQL DB externally in some sort of replicated environment? Could I have a second machine with the web interface, or maybe fail over the services (httpd and mysqld) to another server?

    Thanks,


  • Senior Developer

    Image uploads are always set to go to the “master” nodes, so uploads may not work right.

    However, FOG has a plugin that does more or less what you’re trying to do. This plugin is called the “location” plugin.

    You can set your nodes to be specific to a location. You then assign the hosts you need to the location you want them to download from.

