Image deployment slows down
-
Which version of FOG are you using?
-
0.32
I noticed that this happens when deployment reaches around 50%, and then it starts to slow down. The image is 100 GB, so the problem could be the size rather than the 50% mark, but then again, upload of the same image goes without a problem. Speed is 5.5 GB/min at the beginning, all the way up to around 50%; then it starts dropping about 0.01 every second, so by the end of the deployment the speed is around 1.17 GB/min.
-
[quote=“Sebastian M., post: 1058, member: 229”]
The image is Windows 7, x64, ~ 121GB, Multi Partition - Single Disk
[/quote][quote=“MadX, post: 1092, member: 456”]We are deploying an image to about 30 PC’s @ around 1.4GB /min[/quote]
How big are your images for comparison? (ha! sounds like a “how big are YOUR biceps” thing lol)
[quote=“tractor, post: 1832, member: 190”]I am having the same problem. Did you find any solutions?
I have version .32 also.[/quote]How big are your images for comparison?
[quote=“Dark000, post: 16727, member: 867”]Did anyone find some kind of solution for this. I have the same problem, deployment starts fine but than it slows down considerably. I am using Kernel - 3.8.8 Core and i have also tried different kernel. It made no difference.[/quote]
How big are your images for comparison?
[quote=“Dark000, post: 18008, member: 867”]…Image is 100gb big so it could be that problem is the size and not 50% progress…[/quote]
Just wondering if there is a max size for this sort of thing…
Maybe with FOG, one of FOG’s services or configs, maybe the host OS (Ubuntu normally?) or the host OS kernel version… see where I’m going with this? I only wish I could be more helpful than just asking pointed questions…
-
Our images are 16GB single-partition using Win7 x64 with a full complement of pre-installed software. I can image 7-10 computers at once in under 8 minutes without multicast. What in the world are you installing on those images to make them that big LMFAO
-
I’m not sure, but I think drjam is spot on about file size limitations, though I don’t think it’s due to configuration issues in and of themselves.
What gets me thinking is that maybe it’s more an issue with NFS file size limitations than an OS or FOG issue. Just saying.
I think this because:
[LIST=1]
[*]Upload seems to work perfectly, which makes sense: the data doesn’t exist on the server until the image upload process starts, and even then it’s just adding data; the server doesn’t have to parse out any information.
[*]On upload, the image is staged in /tmp, where pigz compresses the data and writes it to the NFS share. Technically speaking, no data is on the server until after it’s been processed by pigz, so no information has to be sent back to the client for processing.
[*]During the download process (in unicast), the image is decompressed on the client side through the NFS-mounted share.
[*]This means the data is constantly being read, extracted, and written to the drive. NFS has to maintain a constant cache for the constant stream and processing, and it sounds like the limits are being hit with such large images.
[/LIST]
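The list above can be sketched as a compress-on-upload, decompress-on-deploy round trip. This is purely illustrative (gzip stands in for pigz, which is a parallel drop-in replacement, and the paths are hypothetical, not FOG’s real image layout):

```shell
#!/bin/sh
# Illustrative upload/deploy round trip, NOT FOG's actual scripts.
set -e
printf 'fake partition data' > /tmp/part.raw     # pretend raw partition
gzip -c /tmp/part.raw > /tmp/d1p1.img            # "upload": compress onto the image store
gzip -d -c /tmp/d1p1.img > /tmp/restored.raw     # "deploy": stream back and decompress
cmp /tmp/part.raw /tmp/restored.raw && echo 'round trip OK'
```

On a real FOG server the compression step is pigz feeding the NFS-exported /images store, and the decompression happens on the client over the NFS mount, which is exactly why deploy stresses NFS in a way upload does not.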
Just my thoughts; I don’t really know the specifics of the limitations here.
-
@Tribble, we have to have that kind of size because we need all the programs; there is no other way around it.
@Tom Elliott, I agree with you. Where could I look for those NFS settings? My /etc/exports looks like this:
[CODE]/images *(ro,sync,no_wdelay,insecure_locks,no_root_squash,insecure)
/images/dev *(rw,sync,no_wdelay,no_root_squash,insecure)
[/CODE]FOG is on CentOS 6.4
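In case it helps the experiment, here is one thing that could be tried in /etc/exports: swapping sync for async on the read-only /images export, which lets the server batch replies instead of committing each one (async is reasonably safe on a read-only export; keep sync on /images/dev, where uploads are written). This is an assumption to test, not a known fix:

```shell
# /etc/exports -- experimental variant (async on the read-only export)
/images *(ro,async,no_wdelay,insecure_locks,no_root_squash,insecure)
/images/dev *(rw,sync,no_wdelay,no_root_squash,insecure)
# After editing, re-export without restarting NFS:
# exportfs -ra
```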
-
OK, I just did a test install of fresh FOG 0.32 on Ubuntu 10.04 with no extra settings, so I could eliminate any NFS or CentOS issues.
And the problem is still there. After deployment passes 50 GB it starts to drop: instead of 20 min at 5.5 GB/min, it drops to 1.3-something and takes 1h30min to complete. So the problem is with FOG, or maybe with the image; I will try another image of similar size (over 50 GB), but I doubt that is going to make any difference.
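For the record, the timings quoted above are internally consistent; a quick arithmetic check (awk is just the calculator here, the numbers come from the post):

```shell
# 100 GB deployed at the initial rate vs. the degraded rate
awk 'BEGIN {
  size = 100                              # image size in GB
  printf "fast: %.0f min\n", size / 5.5   # initial 5.5 GB/min
  printf "slow: %.0f min\n", size / 1.3   # degraded 1.3 GB/min
}'
```

That works out to roughly 18 min at full speed versus 77 min degraded, matching the reported 20 min and 1h30min.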
Did anybody successfully (speed-wise) deploy an image over 50 GB? I mean, Win7 is around 30 GB; add MS Visual Studio (both Basic and Express) and that is a little over 10 GB more; then there is MS Office and a few others. So you are easily over 50 GB. In my case almost 100 GB; there is no other way around it. Compressed size of the image is 38 GB.
-
Windows 7, base install, cleaned up before upload, is just around 20 GB. With all my software, my largest image sits at about 40 GB. That software includes Nero 10 Platinum full suite, an office suite, CS6, and some other school-needed software that tends to be disk-heavy. I just don’t fully understand how your images are getting so large.
As for NFS settings, unfortunately I don’t think there’s a way to configure the size limitations, as they seem to be built into NFS itself (NFSv2, for instance, caps files at 2 GB, while NFSv3 and later use 64-bit offsets). I’d take a look at cleaning up the image a little.
One thing I’ve found is that removing the SoftwareDistribution folder, once you’re completely done updating, helps. To do this, open Command Prompt as administrator and run:
[code]net stop wuauserv
rmdir /q/s c:\windows\softwaredistribution
del c:\windows\windowsupdate.log
[/code]Then maybe do a Disk Cleanup, making sure to check all boxes in the dialog box. You can also run it from the command line (after saving the checked options once with cleanmgr /sageset:1):
[code]cleanmgr /sagerun:1[/code]The above cleans all the categories you checked in the Disk Cleanup options.
Then I make sure to leave a clean administrator account by going through and removing all downloaded folders/files. Then from the command line I like to run:
[code]rmdir /q/s c:\users\administrator\appdata\local\temp[/code]After that I like to remove the prefetch and temp directories in the Windows folder:
[code]rmdir /q/s c:\windows\prefetch
rmdir /q/s c:\windows\temp[/code]This should clean up your system quite a bit, especially the updates folder.
-
SOLVED
I mean, this is crazy. The problem was the image. I did some more tweaking; I’m not sure what fixed it, but really the only thing that could have fixed it was fogprep.exe, which I ran this time (I did not run it the first time, since I’d read somewhere that it is no longer necessary). Or maybe that I changed partition sizes in the image for ( ) from 190 GB to 140 GB and ( ) from 100 GB to 20 GB (I expand these via unattend.xml after deployment). @Tom Elliott, thank you for the tips; I managed to get the image down to 90 GB, and the Windows Update Cleanup in particular was huge. Just finished a deployment in 19 min, and that is on the production server, not the test one from yesterday.
I don’t really understand how that was causing the problem for FOG during deployment, but the important thing is that it works now.
-
@Dark000- I’m happy that I helped in the smallest of ways, and only hope this helps others.
All,
I don’t know that FOGPrep or the partition adjustments, alone, corrected this problem. One way to test, I guess, would be to recreate the image without FOGPrep. I’m more than certain the partition adjustments didn’t do it, as the imaging process doesn’t know anything about the partitions themselves. All Partimage does is copy the data to the file; it’s really the FOG scripts that create the partition-specific images. Because of this, I’m pretty certain those were not the problem in the first place.
I’m almost sure FOGPrep wasn’t the fix either, mostly because, to my knowledge, all FOGPrep does is clean the UUID information (hostname and hardware specifics) out of the registry. Beyond that, it really doesn’t care.
It’s starting to sound more like getting the image under a certain size is what improved the speed.