Location capture fail

  • Hello,
    I have 3 sites with a FOG server in each. I’ve enabled the location plugin, and everything seems to be working, but with at least one issue so far. When I capture an image with one of the non-master FOG servers, the image gets captured to the master FOG server (over MPLS). As I’m watching the progress bar, I noticed that the host and image I created at the branch site are not found on the storage group’s master. Am I only allowed to capture hosts’ images at the master node’s site?

  • Senior Developer

    @apollyonus I don’t know what version of FOG you’re running, so without that I can’t give you very precise information about what you’re asking.

    In trunk, images can be replicated across groups. This is managed by what I call the ‘Primary’ master: the group that “owns” the “golden” image, the copy known to be good at the time of completion. This group’s master node replicates the image to all nodes within the group, as well as to all other storage groups that are associated with the image. If you update the image on a different group’s master, it will be overwritten, because the Primary master is the authoritative copy. So if you wanted to use a different group’s already-replicated image as the “master” of all, you would need to give that group primary status for the image.
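    The replication fan-out described above can be sketched as a small conceptual model (purely illustrative; the data shapes and the `replication_targets` helper are hypothetical and are not FOG’s actual code):

    ```python
    # Conceptual model of trunk-era image replication:
    # the Primary group's master node pushes the image to every
    # subordinate node in its own group, and to the master of each
    # other storage group associated with the image.

    def replication_targets(image):
        primary = image["primary_group"]
        # Within the primary group: master pushes to all subordinate nodes.
        targets = [n for n in primary["nodes"] if n != primary["master"]]
        # Across groups: master pushes to each associated group's master.
        targets += [g["master"] for g in image["other_groups"]]
        return targets

    group_a = {"master": "fog-a", "nodes": ["fog-a", "fog-a2"]}
    group_b = {"master": "fog-b", "nodes": ["fog-b"]}
    image = {"primary_group": group_a, "other_groups": [group_b]}

    print(replication_targets(image))  # -> ['fog-a2', 'fog-b']
    ```

    The point of the model: capturing to a non-primary group’s master does not make that copy authoritative, because replication always flows outward from the Primary group.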

    This same principle works for snapins in trunk as well. Mind you, group-to-group replication and snapin replication did not exist before what’s currently in trunk; 1.2.0 and earlier did not have these capabilities.

    Most configurations I’ve seen from users of the location plugin create groups for the different node locations, with the nodes falling under those groups. Why is this important, you may ask? Because of the way the location plugin was designed for the 1.x.x versions of FOG: you can associate a host with a location, and the location does NOT need to belong to a single node. This allows versatility in that, if you have a large building all on the same network, you can have a location managed by many nodes, even under the same group. 0.32 (where this was known as the location patch) only allowed node-to-host imaging. It had no replication capabilities; all it did was tell the host where to download its image. Uploads were still done to whatever was the master node.

    The current location plugin is MUCH more powerful than in the days when it was known as the location patch. First, it allows the group to be a “location” that an assigned host can use, similar to how the storage group/node system works without the location plugin; it simply designates that the host must operate under a specific group. Second, under trunk, you don’t have to maintain differently named copies of what is essentially the same image, because images can cross different groups (maybe this isn’t specific to locations, but it certainly expands a location’s power). Third, the location plugin can tell a host to use its location to get its inits and kernels, further limiting bandwidth usage across the WAN.

    I don’t know if I’ve answered your questions, but yes, most people create a group with a single node that designates the location. That single node is the master of that group, so images needing to be uploaded are handled at that location as well.

  • @Tom-Elliott If I understand you correctly, I could set up storage groups for each location I want to capture images in, and make each server master / non-master depending on which site that node happens to be in? If I do that, can I use /images for each storage group, or would I need to do something more complicated like /images.a for storage group a, /images.b for storage group b, etc.?

  • Senior Developer

    @apollyonus Correct. Master nodes are the masters of the group, which basically means they tell the subordinate nodes what they have. Most people set up groups based on the location the devices are at and associate a master node as the location node of that group. You can still manage everything via a single server, too.
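
    On the path question: each storage node’s image path is configured per node in its own node definition, so reusing the default /images on every server is generally fine; there is no need for /images.a, /images.b, and so on. As a rough reference, a stock FOG storage node exports its image store over NFS with entries along these lines (defaults from a typical install; exact options vary by version, so check your own /etc/exports):

    ```
    # /etc/exports on each FOG storage node (typical defaults, may vary)
    /images *(ro,sync,no_wdelay,no_subtree_check,insecure_locks,no_root_squash,insecure,fsid=0)
    /images/dev *(rw,async,no_wdelay,no_subtree_check,no_root_squash,insecure,fsid=1)
    ```

    The read-only /images export serves deployments, while the writable /images/dev export receives captures in progress.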
