At the risk of extending this feature request even more…
Please understand I’m not trying to be difficult; I truly want to understand whether what I want to do is possible. I think we have a communication misalignment. I’m not doing a very good job of explaining the situation, because I keep getting the same answers (maybe that is the only answer, I don’t know).
But I’m assuming from your replies that in my drawing below there is one full deployment server in that network, with the rest being storage nodes. Is that a correct assumption?
I understand the function of the location plugin: it allows you to assign storage groups and storage devices to a location, and then you link a host to a location so it knows where to pull (and, if necessary, push) an image. I get that. I’ve been using FOG for quite a while.
The issues I’m seeing here are these:
- The storage nodes are not fully functional deployment servers. They are missing the /tftpboot directory. While they do have the PXE boot kernel and filesystem, on their own they cannot provide PXE booting services for a remote site.
- The storage nodes do not appear to have a local SQL server instance running, so I assume they reach out to the master node’s database for each transaction. Historically I’ve seen this be an issue with other products that try to reach across WAN links for transactional data.
- There is no local web interface on the storage nodes, so deployment techs at every site must interface with the HQ master node. This shouldn’t be a big issue, since the web interface is very lightweight as opposed to some other Flash- or Silverlight-based management consoles.
- This one is less a technical issue and more a people issue. Since you will have techs from every site interfacing with a single management node, it’s possible for one tech to mistakenly deploy to (i.e. mess up) hosts at another site, since there is no built-in location awareness in their user accounts.
- On the deployed hosts, where does the FOG service connect to? Is it the local storage node or the master node?
- Storage nodes can only replicate with the master node; i.e. if there are two storage nodes at a remote site, one storage node cannot get its image files from the other storage node at that site. All images must be pulled across the WAN separately for each storage node.
- Multicasting is only functional from the master node, so in the diagram below only HQ could use multicasting to build its clients. (edit: added based on a current unrelated thread)
The FOG system is very versatile, and you guys have put a LOT of effort into it since the 0.3x days; you should be acknowledged for that. Understand I’m not knocking the system you’ve created or the time you’ve spent on the project.
Working through this post, I can see that having a single master node with the rest as storage nodes would work if:
- The /tftpboot directory were included in the files replicated from the master node, and the TFTP service were set up in xinetd. (Actually, this could be built into a storage node deployment by default by having the service and /tftpboot folder set up even when they aren’t used. There is no downside IMO.)
- The user profiles were location-aware, to keep techs from making changes to hosts at other locations. The location awareness must still allow assigning users who have global access for administration purposes.
- The storage nodes were tolerant of latency on slow WAN links, and/or did not break completely during momentary WAN outages.
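On the first bullet: a minimal sketch of what the xinetd side might look like on a storage node, assuming the stock tftpd-hpa package (the in.tftpd path and exact attributes vary by distro, so treat this as an illustration, not the project’s actual installer output):

```ini
# /etc/xinetd.d/tftp -- sketch of enabling TFTP on a storage node.
# Assumes /tftpboot has already been replicated from the master node.
service tftp
{
	socket_type = dgram
	protocol    = udp
	wait        = yes
	user        = root
	server      = /usr/sbin/in.tftpd
	server_args = -s /tftpboot
	disable     = no
	per_source  = 11
	cps         = 100 2
	flags       = IPv4
}
```

With something like that in place, a remote site’s DHCP next-server option could point PXE clients at the local storage node instead of pulling boot files across the WAN.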