Multicast from second storage group



  • I have 2 fog systems:
    FOG13 = main + default storage, only member of default
    FOGStorage02 = storage only, group #2, only member of #2

    Multicast works fine when using an image stored on FOG13, but fails with

    [04-20-16 7:13:40 pm]  Task (3) Multi-Cast Task failed to execute, image file:/images/W10Audit_05 not found!
    

    when attempting to start a multicast for an image on group#2

    Is this not supported for images that aren’t on the default storage group/main server?


  • Moderator

    @Mentaloid said in Multicast from second storage group:

    I have already written a bit of a bodge script that checks multicast tasks on my main FOG server, and starts the UDPCAST process on the storage server if the image is found locally (storage group and image = local)

    Now you have to share the script.



  • @george1421

    Same DHCP server, same scope. The VPN is a hardware tunnel, so computers at both sites act as if they are on the same physical network/scope.


  • Moderator

    @Mentaloid said in Multicast from second storage group:

    @george1421
    Thanks George - I had contemplated that, but I’d have to then have different pxe boot structures.

    While it appears you have a path, your statement above is interesting (?). I’m not sure I’m following this. Does each site have its own DHCP server? Or do they share a single DHCP server but have their own scopes? Unless I’m missing something, there are no booting differences between a master/slave setup and a master/master setup.



  • @george1421
    Thanks George - I had contemplated that, but I’d have to then have different pxe boot structures.

    I have already written a bit of a bodge script that checks multicast tasks on my main FOG server, and starts the UDPCAST process on the storage server if the image is found locally (storage group and image = local). This seems to have worked for my purposes.

    Thanks, everyone, for your input!
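    The bodge script itself was never posted, but a minimal sketch of the approach it describes might look like the following. Everything here is an assumption to verify against your own FOG version: the `list_queued_images` query is left as a placeholder, the partition file name (`d1p1.img`) and the `udp-sender` flags mirror what FOG commonly uses but may differ on your install.

```shell
#!/bin/sh
# Hypothetical sketch: poll the main FOG server for pending multicast tasks
# and, when the requested image exists locally on this storage node, launch
# udp-sender (from udpcast) here instead of on the main server.

IMAGES_DIR="${IMAGES_DIR:-/images}"

# Exit 0 when the named image exists in the local images store.
image_is_local() {
    [ -n "$1" ] && [ -e "$IMAGES_DIR/$1" ]
}

# Placeholder: print the image names of queued multicast tasks, one per
# line, e.g. by querying the main server's MySQL database using the
# fogstorage credentials from /opt/fog/.fogsettings. Schema varies by
# FOG version, so this is left for you to fill in.
list_queued_images() {
    :
}

# Start udp-sender for one partition image. Interface, port base, receiver
# count, and the partition file name are example values -- adjust them.
start_multicast() {
    udp-sender --interface eth0 --portbase 63100 --min-receivers 2 \
        --full-duplex --ttl 32 --nokbd --file "$IMAGES_DIR/$1/d1p1.img" &
}

# Main loop; only runs when invoked with --run, so the functions above can
# be sourced and tested on their own.
if [ "${1:-}" = "--run" ]; then
    while :; do
        for img in $(list_queued_images); do
            image_is_local "$img" && start_multicast "$img"
        done
        sleep 30
    done
fi
```

    The guard around the main loop keeps the script sourceable, which makes the local-image check easy to verify before trusting it with real tasks.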


  • Moderator

    If you can deal with my long-winded answer, my company uses a setup that I think will help.

    We have a location that is connected to HQ via a site-to-site VPN. The remote site only has a 5Mb/s internet connection. Because of access-rights issues I don’t want techs at site A to mistakenly deploy images to systems at site B, and with a standard FOG master/slave setup I can’t restrict FOG in the way I need. So what we set up is what I’m calling a master/master setup with FOG.

    At each site there is a master node with its own storage group. At HQ there is a second master node in a development storage group. The images are tested and validated on the development-environment FOG server. (For this discussion there are 3 FOG servers involved: 2 at HQ and 1 at site B.)

    On the development FOG server I created a storage group and defined the deployment FOG server at HQ and the FOG server at site B as slaves. Understand that all FOG servers in this setup are master nodes; I just defined the two deployment servers as slaves to the development FOG server.

    So with this setup the images will be replicated from the development FOG server to both deployment FOG servers. This works great; nothing tricky (other than defining the deployment FOG servers as slave nodes to the development FOG server) is needed to get the images there.

    The last bit is to export the image definitions from the development FOG server and import them (manually) into both deployment FOG servers. In my case we update the images once a quarter, but never change their name. If we changed the name we would have to go through the manual process of exporting the image configurations and then importing them into both deployment servers. Not a big task if we needed to. Having an automated way to do this would be great. Maybe once fog 1.3.0 is released it will be worth the effort since right now in the trunk the code changes frequently.
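    The quarterly export/import could plausibly be scripted. One hedged sketch, assuming the image definitions live in the `images` table of the `fog` schema and that a `$FOGPASS` variable holds the fogstorage password (the table name, schema name, and host names are all assumptions to verify on your version first):

```shell
#!/bin/sh
# Hypothetical sketch of automating the image-definition copy between a
# development FOG server and the deployment servers.

# Dump the image definitions from the development FOG server ("dev-fog" is
# a placeholder host name).
export_image_defs() {
    mysqldump -h dev-fog -u fogstorage -p"$FOGPASS" fog images > image-defs.sql
}

# Load them into one deployment server. CAUTION: this replaces the whole
# table, clobbering any definitions created directly on that server.
import_image_defs() {
    mysql -h "$1" -u fogstorage -p"$FOGPASS" fog < image-defs.sql
}
```

    Because the import overwrites the whole table, this only suits the scenario described above, where every image definition originates on the development server.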


  • Senior Developer

    @Mentaloid I guess I’m not understanding.

    Multicast will only work on “Master Nodes”. If FOG13 doesn’t have the image, but FOGStorage02 does, and they’re both in the “same” group, I’d recommend making a different group and having each server be the master of its respective group. FOGStorage02 would then be able to multicast images within its own network and FOG13 would be able to do the same.

    Of course, the image MUST exist on the nodes you want to multicast from.



  • @Wayne-Workman
    Yep - I had noticed. 170 gigs worth of images @ full speed of the VPN connection doesn’t outpace sneakernet in this case though!

    wow… I haven’t referenced sneakernet in way too long a time :)


  • Moderator

    @Mentaloid Don’t know if you’ve noticed or not, but you can limit replication bandwidth inside Storage Management.



  • @Tom-Elliott

    The image is only stored on the second server - hence the separate storage group. The reason for this is that the link in the middle is a VPN - FOG13 sits on one side and acts as a storage server for systems on that side, while the other storage server is on the other side of the VPN. Replication between the 2 should not happen, as they are 2 different storage groups, and it is undesired anyway because the VPN is, well… too slow :)

    The trick is, I’d like to be able to multicast from either storage group, so I can essentially pick which server to use.

    It won’t kill me, and I’ll probably bodge a script or plugin eventually. Unicast works fairly well in the meantime.


  • Senior Developer

    @Mentaloid what you’re looking for IS possible. The reason it failed is likely that node 13 didn’t replicate the image to the second server.



  • Wow… that seems like a lot of bother, and possibly a recipe for an “oops”!

    It seems to me (OMG, I am so not volunteering to figure out how to do this… yet) that there should be a way for a storage server to check in with the main server to see if it should be creating multicast host processes, and for the main server to point the clients to the correct storage server.

    Future Feature request? :):)


  • Moderator

    There are ways around it - but they are cumbersome to support and are non-standard.

    You’d basically just do a full-server installation on the storage node that it isn’t working on (as opposed to just a storage-node installation), and after installation you’d edit the /opt/fog/.fogsettings file to point to the main FOG server’s DB. You’d use its IP, the fogstorage username, and the fogstorage password. Those can be found in FOG Configuration -> FOG Settings -> FOG Storage Nodes.
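    For reference, the relevant .fogsettings lines look roughly like this. The variable names match recent FOG installers, but verify against your own file; the values below are placeholders:

```shell
# /opt/fog/.fogsettings (excerpt) -- point the DB settings at the main server
snmysqlhost='192.168.1.13'   # placeholder: the main FOG server's IP
snmysqluser='fogstorage'
snmysqlpass='secret'         # placeholder: the real fogstorage password
```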

    After doing that, re-run the installer. You still need this server to be in its own group, and a master node of its group. You may associate an image with many groups; just remember replication always goes from the original group to the others, and not the other way around.

    Finally, I’d also disable the FOGImageReplicator permanently on the newly made full server, because the real main server will do the replicating for you. Also, the new server will not present the new FTP credentials on-screen like it does in storage-node mode; you’ll have to get them from the username/password fields inside /opt/fog/.fogsettings and plug them into the node’s storage management area.

    What this setup does is create two web interfaces: basically, two full-blown FOG servers sharing the same DB in order to keep everything straight and consistent.

    I used to run this setup at work and it was working for us, but we decided to stop because it posed unique difficulties with the location plugin and with updates. These are minor issues really, but they become real problems when you need a non-Linux expert to be able to seek help on their own, and the people helping won’t be expecting the setup you have. Basically, if I were to leave my work, I wanted them to be able to come to these forums and get help from people that expect a standard setup.

    And wow, does that last line say “hire me” or what? :-)

