Unicast is slow or freezes, Multicast is fine.
I am having a rather weird issue: my clients either freeze while imaging, or the transfer becomes very slow at around 80%. The speed displayed on the screen is still usually around 2 GiB/min, but that is not the actual speed, since the transfer has slowed significantly. A computer will start out saying it will take 8 minutes to image, and once it gets to about 2 minutes remaining it will take another 14 minutes to complete. That means 20-minute imaging times when it doesn’t freeze along the way.
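To put numbers on the discrepancy (rough sketch, assuming a ~20 GB image as described above): at the displayed 2 GiB/min the image should finish in about 9 minutes, which matches the initial estimate, but a 20-minute wall-clock time implies the effective rate is less than half that.

```python
# Rough sanity check on the numbers from the post (image size and times
# are my assumptions based on the description above).

GIB = 1024 ** 3           # bytes in one GiB
GB = 1000 ** 3            # bytes in one decimal GB

image_bytes = 20 * GB     # approximate Windows 7 image size
displayed_rate = 2 * GIB  # bytes per minute shown on the client screen

expected_minutes = image_bytes / displayed_rate
actual_minutes = 20.0     # observed wall-clock time when it slows down

effective_rate_gib = image_bytes / actual_minutes / GIB

print(f"expected at displayed rate: {expected_minutes:.1f} min")   # ~9.3 min
print(f"effective rate over 20 min: {effective_rate_gib:.2f} GiB/min")
```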
For the computers that do freeze, strange as it may sound, they will image fine if I put them in a group by themselves and send a multicast to that single computer. The only explanation I can think of is that multicast uses a different network protocol/method to send the image.
When I first set up the server (August 2012) I was able to consistently image computers (Windows 7, 20-25 GB) in 4-5 minutes. Is there any general maintenance I should be performing on the server?
There are roughly 2000 computers registered in the database. I have it backed up and was about ready to do a rebuild of the server, but I wanted to exhaust some other options first.
Anybody have a clue what is going on? Need me to post some logs? Hope someone can help!
Dan, that is correct: when a unicast host requests the image, it is sent to the machine and then decompressed at the client level.
For multicast, the image is decompressed and sent in “sections” to the hosts. Each machine waits for the others to catch up with the current “section” before moving on to the next. This puts a higher load on your server, but it helps when you are imaging a large number of computers of the same model.
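The lockstep behaviour described above can be sketched with a toy model (this is an illustration of the idea, not FOG’s actual code): in multicast every section costs as much as its slowest host, while in unicast each host proceeds independently.

```python
# Toy model of the "wait for the slowest host" behaviour: section_times[h][s]
# is the seconds host h needs for section s.

def multicast_time(section_times):
    """All hosts advance in lockstep, so each section costs as much as
    its slowest host."""
    sections = len(section_times[0])
    return sum(max(host[s] for host in section_times)
               for s in range(sections))

def unicast_time(section_times):
    """Each host proceeds independently; the batch finishes when the
    slowest host finishes its own total."""
    return max(sum(host) for host in section_times)

# Three hosts, four sections; a different host is slow on each section,
# so the lockstep (multicast) total is worse than any single host's total.
times = [
    [20, 10, 10, 10],
    [10, 20, 10, 10],
    [10, 10, 20, 10],
]
print("multicast:", multicast_time(times))  # 20 + 20 + 20 + 10 = 70
print("unicast:  ", unicast_time(times))    # max(50, 50, 50) = 50
```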
Some new info on this: we found out that it doesn’t seem to have anything to do with the server itself.
When the deployment slows down and seemingly grinds to a halt on the client, the server has finished all disk and network activities. Monitoring the disk read rate and the network transfer rate from the terminal confirms this.
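For anyone wanting to do the same check from the terminal, one approach is to sample the byte counters in /proc/net/dev twice and diff them. This sketch just parses the counter format against a sample (the interface names and numbers below are made up); in practice you would read the real file, sleep a second, read it again, and subtract.

```python
# Parse /proc/net/dev-style counters into {interface: (rx_bytes, tx_bytes)}.

def parse_net_dev(text):
    counters = {}
    for line in text.splitlines()[2:]:   # first two lines are headers
        iface, data = line.split(":", 1)
        fields = data.split()
        # field 0 is received bytes, field 8 is transmitted bytes
        counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

sample = """Inter-|   Receive                |  Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
    lo: 1000 10 0 0 0 0 0 0 1000 10 0 0 0 0 0 0
  eth0: 500000000 40000 0 0 0 0 0 0 900000000 70000 0 0 0 0 0 0
"""
print(parse_net_dev(sample)["eth0"])  # (500000000, 900000000)
```

Sampling the real file twice a second apart and subtracting the tuples gives bytes/sec in each direction, which is how we confirmed the server had gone quiet.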
The only thing I could think of was that on a single image deployment the image is decompressed after it is transferred on the client itself. Does this sound right? Any way to confirm?
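If the image really is decompressed on the client after transfer, a slow client CPU or disk would explain a stall near the end (and watching the client’s CPU during the stall would be one way to confirm). As a rough feel for decompression cost (a sketch only; FOG’s actual pipeline differs), you can time gunzipping a blob of data:

```python
# Time gzip decompression of a 32 MiB blob to get a ballpark MiB/s figure.
import gzip
import os
import time

payload = os.urandom(32 * 1024 * 1024)       # 32 MiB of incompressible data
blob = gzip.compress(payload, compresslevel=1)

start = time.perf_counter()
restored = gzip.decompress(blob)
elapsed = time.perf_counter() - start

assert restored == payload                   # round-trip is lossless
mib_per_s = len(payload) / (1024 * 1024) / elapsed
print(f"decompressed 32 MiB in {elapsed:.2f}s ({mib_per_s:.0f} MiB/s)")
```

If the client’s decompression rate comes out lower than the network rate, the tail end of the deployment will crawl even though the server is idle.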
I have no idea how FOG works (apart from the fact that I LOVE it), but if I had 2c to offer:
Has any network hardware changed since a year ago? i.e., switches replaced with managed switches, hardware firewalls added, etc.?
You know, equipment that monitors network traffic and can cap/alter it?
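One way to rule network hardware in or out is to push raw TCP between the server and a client and see what rate you get, independent of FOG (a dedicated tool like iperf does the same job). This sketch runs both ends on localhost just to show the shape; in practice you would run the receiver on the FOG server and the sender on a workstation.

```python
# Push 64 MiB over a TCP socket and time it. Both ends run locally here
# purely for illustration; split them across two machines for a real test.
import socket
import threading
import time

CHUNK = 64 * 1024
TOTAL = 64 * 1024 * 1024

def receiver(sock, result):
    conn, _ = sock.accept()
    got = 0
    while got < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        got += len(data)
    conn.close()
    result.append(got)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))         # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

received = []
t = threading.Thread(target=receiver, args=(listener, received))
t.start()

sender = socket.create_connection(("127.0.0.1", port))
start = time.perf_counter()
sent = 0
while sent < TOTAL:
    sent += sender.send(b"\x00" * CHUNK)  # send() may be partial; count it
sender.close()
t.join()
elapsed = time.perf_counter() - start

print(f"pushed {received[0] // (1024 * 1024)} MiB in {elapsed:.2f}s")
```

If the raw TCP rate between the two machines is capped well below wire speed, the problem is in the network path, not FOG.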
And it would be ironic if all the older workstations were having HDD issues leading up to failure at the same time.
Anybody have any other ideas regarding this? Bump
I haven’t done system/package updates in about a month; normally it is an every-two-months-or-so thing.
We do have two new models of laptops (HP ProBook 6565b & 6570b) that have come along since a year ago, but strangely enough those are the two with the least issues. We used to just switch the kernel (or define it in the host info) when we had issues with a model, and usually we could find a kernel that worked with it; this, however, seems different. When it was a kernel issue we would either see the “snow screen” or just really slow transfer speeds, but those were consistent. This new issue seems random, aside from the fact that there are a few models it happens on less often.
I have rebooted the server a couple of times in the past few weeks.
There is not really any maintenance to perform on the FOG specific portions of the server. You should however be doing operating system and package updates as needed.
Are you still imaging the same computers today that you were a year ago? I’m thinking if you are trying to image newer machines, you may need a kernel update for the clients.
And as always, have you tried rebooting the server to see if that clears anything up?