FOG Client over the Internet?

  • Random question, and not sure of the answer. Can the FOG Client be used to deploy snapins over the public internet, if I give the fog server a Public IP and forward some ports?

    Since I’m not sure what tools are used for the client to download and sync snapins, I’m worried about security. But if it’s HTTPS or the like, I could see it working well.

    Note: AD joins and the like would still happen on the local LAN; this would be for management after imaging, almost like an MDM.

  • Moderator

    @wayne-workman Yeah, I was trying to address an issue I was seeing the wrong way. It would be much cleaner to just use split-horizon DNS to address the issue.

  • @bob-henderson said in FOG Client over the Internet?:

    @george1421 Not worth it. Security through obscurity is a joke and just a portscan away.

    SQL Slammer targeted only the default MS SQL port; instances running on non-default ports didn’t get hit.

    You should absolutely have real security in place, but using non-standard ports also helps avoid getting hit when there’s a real vulnerability. Security through obscurity is a valid piece of overall security.

  • @george1421 It’s currently not possible - unless one wants to modify the source and maintain their own build. Here are all installation options for the client:

  • @george1421 Not worth it. Security through obscurity is a joke and just a portscan away.

  • Moderator

    @george1421 This may be a question for the @Developers. Is there a way to change the port the clients use to connect to the FOG server? Let’s say we wanted to use a non-standard port for client-to-server communications. Is that possible?

  • @george1421
    There already is a fog server at each site, a storage node, with the HQ server being the master for inventory management purposes. That part is working great.

    The issue is the number of clients that leave the site for the great unknown. The users are remote in many cases, and come to their local office or HQ maybe once a month, at best.

    I’m looking to manage application versions and such remotely, similar to an MDM. This isn’t what FOG was built for, but I’m hoping to use the pre-existing tool the techs already know to do it. The idea is to have the clients report in to the HQ FOG machine via the client, see any new snapins assigned to them, and grab and go from there.

    A real MDM like Filewave or Intune would be great for this, except for the cost in this case. We use PDQ on site for everything, and it works great, so we’re now looking at options outside the buildings.

  • Moderator

    @bob-henderson With that many hosts at the remote locations, why not add a local FOG server for snapin management? That would avoid having each client reach out to the master FOG server at HQ for tasks.

    Oh, and one thing I missed: how many locations are you talking about?

    Is this for some type of MSP business model?

  • @george1421
    All good points. 300-some devices per location, with some sites going up to 700 or so.

    Now knowing that the system talks HTTP, my plan is to handle it just like any other web server and only expose the needed ports via the DMZ firewall. The majority of our snapins will simply be scripts, delivered via the snapin framework, that instruct the clients to download the software directly from its source location, so I’m not overly worried about slow server-to-client communication over the WAN.

    We’re not going to be using a VPN or some other solution for this. There are numerous reasons for that, but in this case we’re setting up a proof of concept to allow cloud hosting, where a VPN becomes more of an issue.

    When I have a chance to set this up, I’ll report back. I think it’ll work, if I can get it all worked out.
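
    To make the “only expose the needed ports” idea concrete, a DMZ firewall policy could look roughly like the nftables sketch below. This is a hypothetical fragment, not from this thread: the table/chain names are made up, and exposing only TCP 443 assumes the FOG server has been configured for HTTPS (the client checks in over TCP 80 by default).

    ```
    # Hypothetical DMZ firewall sketch for a FOG server exposed to the internet.
    table inet fog_dmz {
        chain inbound {
            # Drop everything that isn't explicitly allowed below.
            type filter hook input priority 0; policy drop;

            # Allow replies to connections the server initiated.
            ct state established,related accept

            # FOG client check-in and snapin downloads.
            # Port 443 assumes HTTPS is configured; default client traffic is TCP 80.
            tcp dport 443 accept
        }
    }
    ```

    Anything the clients don’t need (SSH, MySQL, TFTP/PXE, NFS) would stay reachable only from the internal networks.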

  • Moderator

    @bob-henderson How many systems at each remote location are you talking about?
    I’m not sure I would want to expose the FOG server directly to the internet without some kind of access controls. Possibly using a reverse proxy with a screening router to help.

    These are just some incoherent questions (ramblings) I have:

    1. I think we need to consider what protocols are involved here. The clients check in to the FOG server via http(s). Is http also used to transfer the snapins to the remote systems?
    2. What happens if the client is connected via a slow Internet connection? Are there timeouts involved here?
    3. Is there a way that a reverse proxy running at the remote sites (nginx + Raspberry Pi + OpenVPN) could be utilized to provide a secure channel back to HQ?
    4. Depending on the client density at the remote sites, would/could a FOG-Pi or FOG-NUC server running as a storage node be practical for snapin deployments? (This would also give you a footprint for Chocolatey or FusionInventory at the remote locations.)
    5. How would you manage overlapping or duplicated IP address ranges?
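
    For what question 3 might look like in practice, here is a rough sketch of a site-local reverse proxy: nginx on a Raspberry Pi at the remote site, forwarding client traffic to the HQ FOG server over an OpenVPN tunnel. All hostnames, addresses, and the tunnel IP are placeholders, not values from this thread.

    ```
    # Hypothetical nginx reverse proxy at a remote site.
    # Clients point their FOG client at fog.site1.example.com instead of HQ.
    server {
        listen 80;
        server_name fog.site1.example.com;

        location /fog/ {
            # 10.8.0.1 = the HQ FOG server as reached through the OpenVPN tunnel.
            proxy_pass http://10.8.0.1/fog/;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    ```

    The clients then never talk to the internet directly; only the proxy holds the tunnel back to HQ.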

  • @tom-elliott

    Fantastic. I’m not seeing any issues with pushing Snapins over the public internet then, as long as the snapins are built not to depend on any internal sources. Am I missing something?

    This opens up a ton of new possibilities for us, by the way. Combining Snapins with Chocolatey/OneGet makes a great alternative for our sites that can’t do SCCM for various reasons.

  • It communicates, by default, over HTTP, though you can configure it to use HTTPS if you need to.
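
    For reference, the FOG client keeps its server and HTTPS settings in a settings.json file (on Windows, typically under the client’s install directory). The fragment below is illustrative only; the exact field names and file location may vary between client versions, so check your own install rather than copying this verbatim.

    ```
    {
        "HTTPS": "1",
        "Server": "fog.example.com",
        "WebRoot": "/fog"
    }
    ```

    Flipping the HTTPS flag only matters if the server side is actually serving the /fog web root over TLS.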

    @wayne-workman Alright, that’s well and good, but it doesn’t answer the question in the slightest. I’m wondering, again, what protocol is used to communicate between server and client, whether it’s encrypted, and whether having it communicate over the public internet is a feasible solution.

    I’m hoping to be able to use a FOG server in the DMZ to deploy Snapins to remote clients that don’t have a consistent connection to the local LAN; neither VPN nor DirectAccess is an option in this case.

  • @bob-henderson All fog client communications are already encrypted - using HTTPS on top would be unnecessary overhead. See the wiki article on it for further detail: