Change number of machine imaging limit



  • Hello,

    Is there a limit on how many machines you can image at once via FOG? I’ve deployed my image to a group, it’s only letting me image around 10 machines out of 32, the message that appears after the first 10 is:

    ‘No open slots… there are Number before me…’.

    Please can someone advise me on what to do?

    Thanks,

    Dan
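
    (For readers hitting the same message: the behavior Dan describes is FOG’s per-node Max Clients slot limit, which defaults to 10 unicast imaging tasks; additional hosts wait in a queue and print how many tasks are ahead of them. Below is a minimal sketch of that slot-and-queue behavior. All names here are hypothetical illustration, not FOG’s actual code, which lives in PHP on the server side.)

    ```python
    from collections import deque

    MAX_CLIENTS = 10  # FOG's default per-node unicast slot limit

    active = set()    # hosts currently imaging
    queue = deque()   # hosts waiting for a slot

    def request_slot(host):
        """Return a status line like the one FOG prints at the client."""
        if len(active) < MAX_CLIENTS:
            active.add(host)
            return f"{host}: imaging started"
        ahead = len(queue)          # queued tasks ahead of this host
        queue.append(host)
        return f"{host}: No open slots... there are {ahead} before me"

    def release_slot(host):
        """When a host finishes, the next queued host takes its slot."""
        active.discard(host)
        if queue:
            active.add(queue.popleft())

    # 32 hosts check in at once: the first 10 image, the rest queue up
    messages = [request_slot(f"pc{n:02d}") for n in range(32)]
    ```

    Raising Max Clients on the storage node (Storage Management in the web UI) widens the `MAX_CLIENTS` limit above, at the cost of splitting the server’s bandwidth across more simultaneous transfers.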


  • Developer

@Tom-Elliott that’s what I meant to convey. The default is 10, and that works fine for most initial setups and test environments. It’s up to the person running the server to monitor and adjust the setting to fit their environment’s optimal configuration.


  • Senior Developer

    Every environment is different.

    Just because it happens to one person, does not mean everybody will have the same issue or sets of issues.

    Sometimes commonalities can be found, sometimes they cannot. Suggestions are just that: suggestions.

    We cannot force people to do things one way or the other. Suggest, sure, but getting upset or trying to impose one’s own thoughts and experiences won’t get anywhere. I’m not perfect, and I know sometimes this is a hard pill to swallow.

    Just understand, what one person experiences does not mean every other person must follow that one person’s environmental guidance.


  • Developer

@Wayne-Workman if you have that many systems checking in, sure, but most people don’t. We have about 1,000 checking in, and we do 10 at a time (all clients NIC-boot first, 1 storage node, 1 Gb NIC) and never have any issues with timeouts or any of the rest of what you describe.


  • Moderator

    @Bob-Henderson said in Change number of machine imaging limit:

    @Wayne-Workman I find that a bit overzealous. My FOG server itself is a crappy *buntu VM with 2 GB of RAM and 2 cores, hooked into our vSwitch of 8 ports, and my clients are all i5 laptops with 8 GB of RAM. I routinely image 10 at a time in unicast, 40+ in multicast. When I have all 10 imaging, yes, the imaging is around 50% slower than just doing 1, but the time savings are still huge.

    Do you use the new fog client? Do you have 500 clients that are all configured to network boot first? Do you have 4,000 systems with the FOG Client installed contacting the fog server regularly? Do you have 12 remote storage nodes that all need to talk to the main fog server constantly? Do you have technicians at 15 different locations all needing to use the web interface at once to image & deploy snapins in the GB sizes? At my old job, if the FOG Server was at capacity, network booting didn’t work, web interface didn’t work, login history didn’t get processed… It’s not overzealous at all - I’ve seen this issue over and over, at home, at other organizations, at my old work. Numerous people in the past have reported this in the forums too. I’m not just making stuff up.

    Never is it acceptable for one technician to start imaging tasks that prevent another technician from queuing tasks because the web interface isn’t responsive. Never is it acceptable for the FOG server NICs to be under so much load that they can’t process domain joins, snapin deployments, network booting, or login history.



  • Yes, that’s correct. Thank you so much!!

    Dan



  • @Wayne-Workman I find that a bit overzealous. My FOG server itself is a crappy *buntu VM with 2 GB of RAM and 2 cores, hooked into our vSwitch of 8 ports, and my clients are all i5 laptops with 8 GB of RAM. I routinely image 10 at a time in unicast, 40+ in multicast. When I have all 10 imaging, yes, the imaging is around 50% slower than just doing 1, but the time savings are still huge.


  • Moderator

    @dloudon96 If your FOG server has only one network interface and that interface is 1 Gbps, and you’re using FOG 1.3.x with the default compression setting, and you’re imaging systems that have a 1 Gbps network connection, 8 GB+ of RAM, and any generation of Core i5 or better, then I would recommend a Max Clients setting of 2, and no more than 3.
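
    For context on that recommendation, here is a back-of-the-envelope estimate. The numbers are assumptions for illustration (compression ratio, client disk speed), not measurements from any particular FOG deployment:

    ```python
    # Rough estimate of how many unicast clients a single 1 Gbps FOG server
    # NIC can feed at full speed before the server link becomes the bottleneck.

    LINK_MBPS = 1000                        # one server NIC, 1 Gbps
    WIRE_MBYTES_S = LINK_MBPS / 8 * 0.95    # ~119 MB/s after ~5% protocol overhead (assumed)

    COMPRESSION_RATIO = 2.0        # assumed: image stream expands ~2x on the client
    CLIENT_WRITE_MBYTES_S = 100    # assumed: sustained disk write speed per client

    # Wire bandwidth each client needs to keep its disk fully busy:
    per_client_wire = CLIENT_WRITE_MBYTES_S / COMPRESSION_RATIO  # 50 MB/s

    max_full_speed_clients = int(WIRE_MBYTES_S // per_client_wire)
    print(max_full_speed_clients)  # prints 2
    ```

    Under these assumptions only 2–3 clients can image at full speed concurrently; beyond that, every client slows down and imaging tasks simply take longer while still occupying slots.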


  • Senior Developer

    But you tell us where the problem is and state this is something fog is doing, when it clearly isn’t.

    I guess, more directly, can you provide us with a picture of what is happening?



  • Hello,

    I’ve tried that; they fail at the same point. I know that the image is 100% working, as whenever I do two, they both work. They’re all in the same location, it’s one classroom, and all on the same switch too.

    Please help :(

    Thanks,

    Dan


  • Developer

    @dloudon96 Are you talking about the windows set up screen? “starting services” That doesn’t sound like a FOG thing :/

    Try narrowing down the number of machines you are imaging at a time. If you only do 10 at a time, do they succeed?

    Are all the computers in the same location?



  • Sorry, I meant to add that I’ve changed the Max Clients setting to 90.

    Thanks,

    Dan


  • Senior Developer

    @dloudon96 I’m having a hard time understanding the last statement. Particularly seeing as “it works one at a time, but not when multiples are imaged.” It, at this point, would no longer be a FOG issue in the least.



  • Hello,

    It is. I’ve uploaded my sysprepped image to the FOG server, and I’m deploying it to 20 machines. It keeps failing at the “starting services” part after the image has downloaded. It works if I do them one at a time, but that isn’t really what I want. I know the sysprep and everything works, since it succeeds when I deploy the image one at a time, but it keeps failing with any more than two.

    Please help, I’ve got 3 days to get 90 machines re-imaged, as the students come back on Monday…

    Thanks,

    Dan


  • Developer

    @dloudon96 Did you change the setting on the fog server Storage Management page for the nodes?

    [Image attachment: fogstorage.png – Storage Management node settings showing the Max Clients field]

    This then causes the graph on the front page to display a higher number:

    [Image attachment: fogstorage2.png – FOG dashboard graph reflecting the updated client count]

    Is this the answer to the question you were asking?

