How big can Snapins be now?
-
Hi,
Ever thought about a local sync feature for snapins?
This could save a lot of traffic in big environments. Look at Dropbox: they use a local sync feature for client synchronisation.
This would need to be implemented, and I am sure it's a lot of work, but it could be a solution for this problem.
FOG clients would have to talk to each other, and they would need to know which snapin should be deployed to whom.
Regards X23
-
Wow, this thread has made a big loop. (I admit I didn't read the entire thread, so this may already have been solved.)
The way I see it, there are two (maybe four) options:
- Open up the php settings (size and timeout) to allow bigger snapins to be uploaded to the FOG server, with all the negative impacts.
- Deploy a snapin that calls the installer from a common share. (my preference)
- Deploy a snapin that spawns a script to copy the install files locally to the target computer (such as c:\windows\temp) and then launches the installer from there, cleaning up (deleting) the install files afterwards (see the sketch after this list).
- Use a third party tool like PDQ Deploy to deploy the applications using one of the three above methods. The advantage of PDQ Deploy is that you can use a manual list or an AD OU as a selection source to deploy applications. Actually you could call a PDQ Deploy package from a snapin.
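To make the local-copy option concrete, here's a minimal sketch of what such a wrapper could look like in Python. The share path, installer name, and the /S silent switch are assumptions for illustration only; substitute whatever your environment and installer actually use.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

# Hypothetical source location; point this at your real software share.
SOURCE = Path(r"\\fileserver\software\bigapp\setup.exe")

def install_from_local_copy() -> int:
    """Copy the installer to a local temp dir, run it, then clean up."""
    with tempfile.TemporaryDirectory(dir=r"C:\Windows\Temp") as workdir:
        local_installer = Path(workdir) / SOURCE.name
        shutil.copy2(SOURCE, local_installer)          # pull the payload local first
        # "/S" is a placeholder silent switch; real installers vary (/quiet, /s, ...).
        result = subprocess.run([str(local_installer), "/S"])
        return result.returncode                       # temp dir is deleted on exit

if __name__ == "__main__":
    raise SystemExit(install_from_local_copy())
```

The same idea works as a batch or PowerShell one-liner; the point is simply copy, run, delete, so nothing persists on the client afterwards.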
-
@x23piracy I like that idea a lot. Sort of P2P with snap ins?
-
@THEMCV I don't know exactly how to deal with that, and the devs will hate me, as they always have a bunch of work to do. So don't expect this feature; it requires a lot of work.
Regards X23
-
@x23piracy The problem with this concept (besides making the devs very unhappy) is that for clients to share snapins among themselves, the content of the snapins would need to persist on the clients (forever). The way snapins work today, the payload of the snapin is downloaded to the target computer, executed, then deleted. For P2P snapins to work, the payload would have to remain on the target computer until the machine is removed from service.
This P2P proposal is how Windows updates work by default in Windows 10.
-
I'm still a little bewildered by the idea that 2+GB of data is considered large, to be honest.
With 10/40Gbit core switch/server connectivity and gigabit to the desktop, 2GB of data is tiny, even when deploying to hundreds of machines. 2GB over a 1Gbit connection would take around 17 seconds to copy (theoretical, obviously; reality will differ dramatically depending on disk speeds, network congestion, etc.).
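For what it's worth, here's the back-of-the-envelope math behind that figure (pure line-rate arithmetic, ignoring protocol overhead, disks, and congestion):

```python
size_bits = 2 * 2**30 * 8     # 2 GiB expressed in bits
link_bps = 1_000_000_000      # 1 Gbit/s line rate
print(size_bits / link_bps)   # ~17.2 seconds -- the "17 seconds" above
```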
Whichever way you do the install, that data is still going to go across the network - be it by running from a share or by copying and running it locally on the machine.
-
@Tony-Ayre I think the key point is to avoid hammering the FOG server with a snapin download across so many clients when it might be needed for other things.
If even just one client is downloading a snapin that takes a while, imaging will slow to a crawl. That is often undesirable or even unacceptable, especially since there's no real oversight of how long it might take.
Sure, regular network traffic could also cause this, but if your network is as good as you say it is, then the bottleneck will be the FOG server itself (most likely the hard drive/SSD), which can only push so much bandwidth before it caps out.
Snapins downloading slowly (since a ton of them would most likely be happening at the same time) is no problem if nothing else needs to be done, I suppose, but that's a rather specific use case, imo.
-
@Tony-Ayre The issue is not specifically FOG related but more PHP related. You can push the settings in PHP to allow bigger files, but there are some practical limits that you can't overcome.
http://php.net/manual/en/features.file-upload.common-pitfalls.php
I can suggest that you raise the settings and see what happens. You will need to restart the Apache server after making php.ini adjustments.
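For reference, the relevant php.ini directives look something like this (the values are illustrative only, not recommendations; pick numbers that fit your snapin sizes):

```ini
; php.ini - raising upload limits for large snapins (illustrative values)
upload_max_filesize = 4G
post_max_size       = 4G      ; must be >= upload_max_filesize
max_execution_time  = 600     ; seconds the upload script may run
max_input_time      = 600     ; seconds allowed for receiving input
memory_limit        = 512M
```

As the linked pitfalls page notes, post_max_size has to be at least as large as upload_max_filesize, and Apache needs a restart before the new values take effect.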
-
@george1421 The original post was more to do with the fact that Apache, back when I last used FOG, simply couldn't take an upload larger than 2GB, regardless of settings. This was even on 64-bit at the time.
-
When you run an installer from a network share, SOME of the data is transferred across the network, but that data is much more readily usable and the install will finish much faster. This is because it's not doing a bit-for-bit copy and can "hold" as you iterate through the menus and whatnot.
In the case of snapins, you STILL have to download the file in its entirety to EVERY system before that system can execute it.
Either way, data is being transferred over the network at some level; how it's used is the difference.
When executing a file from a share, the client doesn't have to copy the entire file; it can behave as if the file already existed on the system (albeit a little more delayed on the initial load-up).
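As a sketch of that run-from-share behaviour, the snapin payload can be reduced to a tiny launcher; again, the UNC path and the /S switch are made-up examples:

```python
import subprocess

# Hypothetical UNC path; the account the snapin runs as needs read access here.
INSTALLER = r"\\fileserver\software\bigapp\setup.exe"

# Executing straight off the share: Windows pages the binary in on demand,
# so only the parts the installer actually touches cross the wire.
raise SystemExit(subprocess.run([INSTALLER, "/S"]).returncode)
```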
However you want to perform a snapin is up to you. I'm just giving ideas.
But what do I know?
-
@Quazz said in How big can Snapins be now?:
I think the key point is to avoid hammering the FOG server with a snapin download across so many clients when it might be needed for other things.
Exactly. There can be situations where the FOG server is so busy that it doesn't have the time or capacity to acknowledge an image deployment completing. I imagine the same thing could happen with snapins if the server simply didn't have the capacity to respond. When this happens with image deployments (depending on boot order), the machine will reboot and image again, or it goes into a reboot loop: network isn't the first boot item, but the fog client sees that there is an imaging task waiting and reboots the system. There's also the issue of losing any login history being reported to the FOG server, because the server is just too busy to acknowledge it.

The biggest issue I had personally was with network booting in general when the server was slammed. If we imaged 3+ machines at a time, then due to a network bottleneck on our FOG server, other systems just couldn't network boot. Why? Because the FOG server couldn't respond, so the request would just time out. This is a problem when your entire building of 500+ physical desktops is set to network boot first.
What was the answer to all of these issues? Just set the max clients down to 2. Solved. This was also the answer at all the other sites in my org where a large distributed FOG system was built and everyone was imaging at once.
-
@Tony-Ayre Just wanted to acknowledge that this is definitely a limitation of the FOG backend that we inherited when @Tom-Elliott picked up this project. Our current rewrite addresses this problem (and most scaling problems), but it'll be quite a while before FOG 2.0 is ready.