How big can Snapins be now?



  • It’s been a number of years since I used FOG, and back then there was a limit on the size of Snapins (2GB). Has this changed now?

    I’m using SCCM for OS deployment at the moment, and it is great when it works, but it has a habit of breaking, and then I have to spend hours trawling through logs to fix it! The only reason I stopped using FOG was the 2GB Snapin problem. I would love to start using it again.


  • Senior Developer

    @Tony-Ayre Just wanted to acknowledge that this is definitely a limitation of the FOG backend that we inherited when @Tom-Elliott picked up this project. Our current rewrite addresses this problem (and most scaling problems), but it’ll be quite a while before FOG 2.0 is ready.


  • Moderator

    @Quazz said in How big can Snapins be now?:

    I think the key point is to try and avoid hammering the FOG server with snapin downloads across so many clients, when it might be needed for other things.

    Exactly. There can be situations where the FOG Server is so busy that it doesn’t have the capacity to acknowledge that an image deployment has completed. I imagine the same thing could happen with snapins if the server just didn’t have capacity to respond. When this happens with image deployments, then depending on boot order, the machine will either reboot and image again, or go into a reboot loop: network isn’t the first boot item, but the fog client sees that there is an imaging task waiting and reboots the system. There’s also the issue of losing all login history reported to the FOG Server, because the server is just too busy to acknowledge it.

    The biggest issue I had personally was with network booting in general when the server was slammed. If we imaged 3+ machines at a time, the network bottleneck on our FOG Server meant other systems just couldn’t network boot. Why? Because the FOG Server couldn’t respond, the request would just time out. This is a problem when your entire building of 500+ physical desktops is set to network boot first.

    What was the answer to all of these issues? Just set max clients down to 2. Solved. That was also the answer at all of the other sites in my org where a large distributed FOG system was built and everyone was imaging all at once.


  • Senior Developer

    When you run an installer on a network share, SOME of the data is transferred across the network, but the data is much more readily usable and will finish much faster. This is because it’s not doing a bit-for-bit copy and can “hold” as you iterate over the menus and whatnot.

    In the case of snapins, you STILL have to download the file in its entirety to EVERY system, then the system can execute the file.

    Data, at least at some level, is being transferred over the network, but how it’s being used is the difference.

    In the case of executing a file from a share, it’s not having to copy the entire file, it can do what it would do had the file already existed on your system (albeit a little bit more delayed in the initial load up).

    However you want to perform a snapin is up to you. I’m just giving ideas.

    But what do I know?



  • @george1421 The original post was more to do with the fact that Apache, back when I last used FOG, simply couldn’t take an upload larger than 2GB, regardless of settings. This was even on 64-bit at the time.


  • Moderator

    @Tony-Ayre The issue is not specifically FOG-related but more PHP-related. You can raise the settings in PHP to allow bigger files, but there are some practical limits that you can’t overcome.
    http://php.net/manual/en/features.file-upload.common-pitfalls.php

    I can suggest that you raise those settings and see what happens. You will need to restart Apache after making php.ini adjustments.
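    For reference, the relevant php.ini directives look something like the following. The values here are only an illustration; pick limits that fit your environment, and note that `post_max_size` must be at least as large as `upload_max_filesize`:

```ini
; php.ini — example values for allowing large snapin uploads
upload_max_filesize = 4G
post_max_size = 4G        ; must be >= upload_max_filesize
max_execution_time = 300  ; give the upload time to complete
max_input_time = 300
memory_limit = 512M
```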


  • Moderator

    @Tony-Ayre I think the key point is to try and avoid hammering the FOG server with snapin downloads across so many clients, when it might be needed for other things.

    If even just one client is downloading a snapin that takes a while, imaging will slow to a crawl. That is often undesirable or even unacceptable, especially since there’s no real oversight on how long it might take.

    Sure, regular network traffic could also cause this, but if your network is as good as you say it is, then the bottleneck will be the FOG server itself (most likely the hard drive/SSD) which can only throw around so much bandwidth before it caps out.

    Snapins downloading slowly (as a ton of them would most likely be happening at the same time) is no problem if nothing else needs to be done, I suppose, but that’s a rather specific use case, imo.



  • I’m still a little bewildered by the idea that 2+GB of data is large to be honest.

    With 10/40Gbit core switches/server connectivity and gigabit to the desktop, 2GB of data is tiny, even if deploying to hundreds of machines. 2GB over a 1Gbit connection would take about 17 seconds to copy (theoretical, obviously, as reality will change that dramatically depending on disk speeds, network congestion, etc.).
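    That 17-second figure is just the ideal line rate, ignoring protocol overhead and disk speed; the back-of-envelope arithmetic is:

```python
# Theoretical best-case transfer time for 2 GiB over a 1 Gbit/s link.
payload_bits = 2 * 2**30 * 8   # 2 GiB expressed in bits
link_bps = 10**9               # 1 Gbit/s line rate
seconds = payload_bits / link_bps
print(round(seconds, 1))       # ~17.2 seconds, before any overhead
```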

    Whichever way you do the install, that data is still going to go across the network - be it by running from a share or by copying and running it locally on the machine.


  • Moderator

    @x23piracy The problem with this concept (besides making the devs very unhappy) is that to share snapins among the clients, the content of the snapins would need to stay persistent on the clients (forever). The way snapins work today, the payload of the snapin is downloaded to the target computer, executed, then deleted. For p2p snapins to work, the payload would have to remain on the target computer until it is removed from service.

    This p2p proposal is how Windows Update works by default in Windows 10.



  • @THEMCV I don’t know exactly how to deal with that, but the devs will hate me, as they always have a bunch of work to do. So don’t expect this feature soon; it would require a lot of work.

    Regards X23



  • @x23piracy I like that idea a lot. Sort of P2P with snapins?


  • Moderator

    Wow, this thread has made a big loop. (I admit I didn’t read the entire thread, so this may have already been solved.)

    The way I see it, there are 2 (maybe 4) options.

    1. Open up the PHP settings (size and timeout) to allow bigger snapins to be uploaded to the FOG server, with all the negative impacts that brings.
    2. Deploy a snapin that calls the installer from a common share. (my preference)
    3. Deploy a snapin that spawns a script to copy the install files locally to the target computer (such as c:\windows\temp), launches the installer from there, and then cleans up (deletes) the install files afterwards.
    4. Use a third party tool like PDQ Deploy to deploy the applications using one of the three above methods. The advantage of PDQ Deploy is that you can use a manual list or an AD OU as a selection source to deploy applications. Actually you could call a PDQ Deploy package from a snapin.
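    Option 3 can be sketched roughly as follows. This is a hypothetical helper, not FOG code; the share path and installer arguments are placeholders you would fill in for your environment:

```python
import os
import shutil
import subprocess
import tempfile

def run_from_local_copy(source_path, args=()):
    """Copy an installer local, run the copy, then clean up (option 3)."""
    tmp_dir = tempfile.mkdtemp(prefix="snapin_")
    local_path = os.path.join(tmp_dir, os.path.basename(source_path))
    shutil.copy2(source_path, local_path)  # pull the payload local first
    try:
        # Run the local copy so execution no longer depends on the share.
        return subprocess.run([local_path, *args]).returncode
    finally:
        shutil.rmtree(tmp_dir, ignore_errors=True)  # delete install files
```

    On Windows the same idea is usually a two-line batch file (`copy` from the share to `%TEMP%`, run the installer, `del` it); the sketch just makes the copy, run, cleanup order explicit.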


  • Hi,

    ever thought about a local sync feature for snapins?
    This could save a lot of traffic in big environments.

    Look at Dropbox: they use a local sync feature for client synchronisation.

    This would need to be implemented, and I am sure it’s a lot of work, but it could be a solution to this problem.
    FOG clients would have to talk to each other, and they would need to know which snapin should be deployed to whom.

    Regards X23


  • Senior Developer

    @Tony-Ayre Thinking a bit more: if licensing is the reason for “needing” snapins, might I suggest another approach? While snapins can do some pretty amazing things, as far as I understand them they were never intended to be software installation items. Granted, they can do these things, but it does require a lot more thought.

    I wonder if it would be “simpler” to keep track of which devices/systems should NOT have a specific piece of software/license and use a snapin to remove that item from those systems? I know it would mean a lot more management (by registering hosts you can somewhat simplify this, adding hosts to a group), but it would also achieve the same results you’re after without having to transfer large amounts of data. The level of work would ultimately be greatly lessened, as you would only need to write the uninstall snapin script once. While it would still have to be downloaded X number of times (X being the number of hosts that need the software uninstalled), it’s much less taxing on the server’s IO and the network’s bandwidth.
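    As a concrete (hypothetical) shape for such an uninstall snapin: most MSI-installed software can be removed silently with `msiexec /x <ProductCode> /qn`, so the script only has to know the product code. A tiny helper might look like:

```python
import subprocess

def msiexec_uninstall_cmd(product_code, silent=True):
    """Build the msiexec command line to remove an MSI by product code."""
    cmd = ["msiexec", "/x", product_code]
    if silent:
        cmd.append("/qn")  # quiet, no UI — suitable for a snapin
    return cmd

def uninstall(product_code):
    # On a Windows host this actually performs the removal.
    return subprocess.run(msiexec_uninstall_cmd(product_code)).returncode
```

    The snapin payload is then just this small script plus a product code, a few kilobytes instead of a multi-gigabyte installer.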

    Any approach you take should be fine in either case. Just giving thoughts on how I might think to do things.



  • @fry_p Licensing depends on the software. However, all the education site licenses I’ve come across in the last 10 years have said the same thing - it applies to a single “site”. So, in our trust that’d be a single school.


  • Moderator

    @Tony-Ayre said in How big can Snapins be now?:

    @Wayne-Workman Why is that? 2+GB files aren’t really that big.

    If you’re deploying to just 1 host, then it isn’t big. Deploy a 2GB snapin to 200 computers and that’s 400GB. Deploy it to 550 computers and that’s over 1TB; I’ve done bigger than this. And the best way to do it is to limit max clients.


  • Senior Developer

    Like I said, I understand why you might want to do things this way; I’m just saying how I managed it. Still, if the snapins are “huge” (2GB is still a lot of data going across a network), I don’t see making images that contain what’s needed as “unnecessary” work. Using snapins will do what you want, but the method you take to handle the installation is still very important.

    Here’s my thoughts on why:
    Let’s say you have a lab of 30 hosts. All of those 30 hosts need the same snapin. This snapin is small, relatively speaking, at just 2GB.

    Using snapins means:
    The 2GB snap-in needs to be downloaded to EVERY host (30 x 2 = 60 GB total data transferred). After this you still need to configure it (granted, this might happen during the installation). As for error handling, you have 30 chances for a problem to happen during installation.

    Using images means:
    The software is already installed and configured. You already know it’s going to work. The image is stored in compressed form, so the data transferred is less. Is it really that much “more” time?

    I hope you understand, I fully get what you’re saying. I disagree, however, that creating separate images is performing “unnecessary” work. You already have the general image, so use that as the “basis” and simply install the program(s) you need and upload.


  • Testers

    @Tony-Ayre I thought site licenses are typically unlimited while volume licensing is the limited type?

    Listen, I have 4 elementary schools, 2 middle schools, a high school, a community center, an administrative building, etc., all containing different software. I have at most 15 commonly used images that take up under 500GB of space. I use snapins for anything that fits under the size threshold; anything bigger I make an image for. Do what you see fit, but in my opinion, snapins are not meant for large pieces of software. If you don’t agree, perhaps your current solution fits your needs better.



  • @Tom-Elliott Licensing in schools is done by site license usually, meaning each school has its own key for things. There are then limits on how many installs of various packages can be performed. So, yes, licensing is a major reason why we can’t do this.

    We also don’t really want to put everything on every computer; that would be absurd, especially when we have plenty of computers with 128GB SSDs.

    All of this is somewhat beside the point, though: we want to do things a certain way, so we need snapins to be able to handle it. :)


  • Testers

    @Tony-Ayre said in How big can Snapins be now?:

    Let’s put it this way. I am now running IT for 6 schools and 3 nurseries. Each of those schools has a dozen PC types, with about 3 different roles for each. This number is also likely to grow as more schools join our trust.

    So, to create a full image type for each machine with each role would be rather a lot of unnecessary work.

    Instead, a single general image (containing the base level of software), combined with Snapins will reduce that work tremendously, along with the storage needs of images.

    I know not everyone’s environment is the same. I figured I’d throw it out there just in case. After some thought, I also assume that you use an .msi for deployment of Smart Notebook, so the web based installer is probably out. As much as I figure you’d like to avoid it, .bat files may be your only bet. What are your network shares like?

    I want to add one more thing: The way we make different images is to create a base or “student” image. We upload that and then base our other images off of it. For example, the “teacher” image is made by deploying the already uploaded student image to a pc, then simply installing the Smart Notebook software, and boom, re-upload as the teacher image. It’s a little time consuming at first, but don’t think you must build from scratch, though the non-standard models may make it a little more complex. If you don’t have sufficient storage, I guess that won’t work. I use 500GB of space for my images. Just a thought.

    Like @Tom-Elliott said, if you have a Smart Notebook site license, throwing it on a general image won’t hurt. It could get messy, but you may be able to control icons with GPO’s.

