How to make deployment quicker with Windows 7 and FOG
-
Scenario: HP DC6000, 160 GB HDD with 2 partitions (System - 2 GB; OS - 100 GB, with about 24 GB used by the OS, Office 2010, printers, and Microsoft updates), leaving roughly 45 GB of unused blank space. I used FOG to capture the image as Multiple Partition Image - Single Disk (Non-Resizeable), since my hosts all have the same HDD size or bigger. Deploying to another system takes about 45 minutes, and yes, I have gigabit from the FOG server to the switch to the host. Is there any way I can make this quicker? Imagine doing this for 10 workstations.
-
45 minutes is actually pretty fast, especially for multiple systems. Are you sure the host you're imaging has a gigabit NIC and the switch is gigabit?
The largest image I have, at 40 GB, takes ~10 to 20 minutes to deploy. It moves between 4.7 and 5.2 GB per minute.
That works out to roughly 80 MB/s, a healthy fraction of gigabit wire speed (1 Gbps tops out around 125 MB/s).
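If you want to check the conversion yourself, here's a quick one-liner (the 5.0 GB/min is just the midpoint of the range above):
[code]
# GB/min -> MB/s -> Gbps, decimal units (1 GB = 1000 MB)
awk 'BEGIN { r = 5.0 * 1000 / 60; printf "%.0f MB/s = %.2f Gbps\n", r, r * 8 / 1000 }'
# prints: 83 MB/s = 0.67 Gbps
[/code]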
-
Well, I tried to multicast to 8 workstations and the estimated time was 9 hours; the speed was 65 MiB/min across the 8-host multicast. I let it finish, and yes, it took about 9 hours. I really want to believe in FOG and use it to deploy my Windows 7 in-place upgrades, so I'm trying my best to make this work, but the imaging time is killing me, I guess because of my huge HDDs. I can't be the only one with HDDs over 100 GB running Windows 7 at this point in 2013.
-
It's not the HDDs that are the issue. From the sound of it, it's not the image size by itself either, as you already stated the images are only 45 GB. EDIT: I misread the last post.
I'd start by trying unicast instead of multicast. Let your client systems handle the decompression rather than forcing your server to do that work on top of streaming the data to every system.
-
Well, even after rereading that: what are you putting in your images?
As described in other posts, NFS seemingly can't handle images over 50 GB. You're running with almost 115 GB images?
In 2013, yes, the HDDs are larger, but the image sizes should not be that large. I'm assuming you're placing and leaving the install files on the systems you're creating the image with? That adds up quickly. Why not use network file shares to hold the install data you need? Install what you need, then clean it up.
The images I work with have all the same types of data you're talking about, plus some huge heavy hitters. I'm looking into what I can do to make our image sizes smaller, and ours aren't anywhere near the size you're describing.
All of my images are Windows 7 64-bit with the Microsoft Office suite, Chrome, Firefox, Adobe Flash, Shockwave, Reader, Nero, and plenty more programs; all said and done, their image sizes are 40 GB, max. I might have one sitting at 41.7 GB, which is huge for me.
I'd look into what you can do to make your image sizes smaller before worrying about imaging speed. There have to be better ways of getting your images down to more appropriate sizes. Your base image shouldn't be that large, IMHO.
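If it helps, a few of the standard Windows 7 shrink steps before capture (all built-in tools; the hibernation file alone is as large as the machine's RAM):
[code]
:: Remove the hibernation file (frees space equal to installed RAM)
powercfg -h off

:: Clear cached Windows Update installers
del /s /q C:\Windows\SoftwareDistribution\Download\*

:: Run the built-in disk cleanup wizard
cleanmgr.exe
[/code]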
-
Well, this is what I've been thinking about this past hour: after all programs are installed, that takes about 35 GB out of the 100 GB, leaving roughly 60 GB free. So I think I'm going to recreate the base image on a 60 GB partition, of which about 35 GB will be used, leaving roughly 50% free for Microsoft updates or whatever else may be needed. That should shrink my deploy from a 100 GB drive to a 60 GB one; that's 40 GB taken off.
-
[url]http://www.fogproject.org/forum/threads/image-deployment-slows-down.267/#post-19396[/url]
Take a look through this post; it has some things that may help you out.
-
Yeah, it definitely sounds pretty close to my issue. FYI, I'm going to re-run the upload/download on the huge HDD as a baseline speed test, then repartition it from 100 GB down to 60 GB total and see where I sit with transfer times. Also, sysprep vs. fogprep: should I run both, or one over the other?
-
Fogprep is no longer required; some still use it, but it's not a necessity. Sysprep is a MUST with Windows 7 if you plan to activate against a KMS server: KMS requires each machine to have unique identifiers, and sysprep cleans the system so each machine can generate them.
I cannot verify whether the same is true for other activation methods.
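For reference, the usual Windows 7 generalize-and-seal step looks like this; run it as the very last thing before capture (the unattend path is only an example, point it at wherever your answer file lives):
[code]
:: Generalize the install, show OOBE on next boot, and power off ready for capture
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml
[/code]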
-
Yeah, sysprep has been burned into my head as the very last thing you do to seal a box for image deployment, ever since the days of Windows 2000. No KMS here, just volume licensing on Windows 7 Pro/Enterprise. I'll give my transfer speed updates soon.
-
While Jaymes is correct that you shouldn't need FOGPrep, in the post linked above Dark000 used fogprep and saw different results, though I'm not sure whether the difference came from fogprep alone or from fogprep combined with the cleanup of his image.
-
OK, I shrunk the whole partition down to 60 GB total, with 35 GB used by the OS and software installs. I also cleaned out all the areas you mentioned, TR, which amounted to KBs and MBs of space to begin with, so no real change in the full size of the Windows 7 OS plus Office 2010, Adobe Reader, Flash, and Java.

My speeds still look like this: 1.82 GiB/min, which averages out to 20 minutes to upload the 60 GB partition, with 35 GB of actual data copied up to the FOG server's /images directory and compressed down to 25 GB there. So it doesn't matter whether my initial partition is 100 GB or 60 GB, because what never changes is the 35 GB of OS data that has to be uploaded.

So maybe I wasn't running at full 1 Gb speeds the day I was testing at the remote site. Next time I will plug my mobile FOG server directly into a port on the switch instead of using a network drop out on the floor. And maybe I'll just buy a small 10-port gigabit switch and plug the workstations into it along with my mobile FOG server, because then I'll know for sure that every port from workstation to FOG server to switch is 1 Gb.

Unfortunately I don't have any numbers from downloading the image to a single PC at the remote site; all I have is the multicast speed, which was 65 MiB/min for 8 hosts with 35 GB to deploy, and that took 9 hours. Maybe someone can figure out from this info whether I was running at 100 Mb or 1 Gb, because if I'm currently getting 1.82 GiB/min, and that takes 20 minutes to upload 35 GB, then 8 workstations back to back would take 2 hours and 40 minutes, not 9 hours!
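For reference, here is the raw conversion of those two observed rates (plain awk; 1 MiB/s is about 8.39 Mbps):
[code]
# Convert the two observed rates to MiB/s and Mbps
awk 'BEGIN {
    up = 1.82 * 1024 / 60;   # unicast upload: 1.82 GiB/min
    mc = 65 / 60;            # multicast:      65 MiB/min
    printf "upload:    %.1f MiB/s (~%.0f Mbps)\n", up, up * 8.389;
    printf "multicast: %.2f MiB/s (~%.1f Mbps)\n", mc, mc * 8.389;
}'
# upload:    31.1 MiB/s (~261 Mbps) -> only possible on a gigabit link
# multicast: 1.08 MiB/s (~9.1 Mbps) -> far below even 100 Mb wire speed
[/code]
So the unicast upload was clearly riding a gigabit link, while the multicast crawled along at single-digit megabits, which points at something other than raw port speed (a duplex mismatch or one slow client holding the whole stream back, for example).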
-
What kind of disk setup do you have on your server? How much RAM/CPU?
If you were going at wire speed, 35 GB should be deployed in about 5 minutes…
Assuming 1 Gbps wire speed, you have to account for:
- NFS overhead (let's say 5%).
- The FOG host's disk (it has to write): standard SATA disks nowadays usually manage 100 MB/s; older ones, figure 50.
- The FOG server's disks: it has to read the data, but with standard disks you should get 100 MB/s at least (you can play with bonnie++ to get an idea, or hdparm -tT /dev/sda, to see what you get).
- To test the server disk, you can use bonnie++ or smartctl (smartmontools).
- To test the bandwidth, you can use iperf and see what you get on the link.
- To test whether your server is up and connected at 1 Gbps, use mii-tool or ethtool.

Even counting 50 MB/s disks on both sides, plus the overhead, you still get roughly 400-500 Mbps; that's still on the order of 10 minutes. Then there is the switch capacity, but for 1:1 with 1 Gbps on each side, I doubt any gigabit switch will fail you.
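If it's easier, here is that whole checklist as one rough script (assuming a typical Ubuntu FOG server; eth0 and the client IP are placeholders, and hdparm, ethtool, and iperf may need installing first):
[code]
#!/bin/bash
# Quick bottleneck triage on the FOG server (run as root).

# 1. Link speed and duplex: should report "Speed: 1000Mb/s" and "Duplex: Full"
ethtool eth0 | grep -E 'Speed|Duplex'

# 2. Raw disk read speed: buffered reads well under 100 MB/s point at the disk
hdparm -tT /dev/sda

# 3. Network throughput: start "iperf -s" on a client first, then
iperf -c 192.168.1.50 -t 20    # substitute the client's real IP

# 4. Thorough disk benchmark (optional, slow; needs a non-root user)
# bonnie++ -d /tmp -u nobody
[/code]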
Good luck
-
Well, 35 GB downloaded as a deployment in 5 minutes for one workstation? That sounds insane. I just tested upload and download for one workstation using a 1 Gb NIC on the host, with my FOG server also on 1 Gb through an Enterasys switch: uploading the image took 20 minutes to copy 35 GB at 1.85 GiB/min, and downloading the same file as a deploy takes 11 minutes at 3.0 GiB/min. Again, this is not the site where I originally did the multicast of 8 hosts, so I will be taking a small 8-port gigabit switch next time to make sure I'm at least getting 1 Gb speeds.
FOG server:
HP DC8300, Intel Core i5 @ 2.90 GHz, 16 GB RAM, Intel 82579M gigabit network card, WD 320 GB SATA III (host).
Ubuntu Desktop 12.04 VM, 8 GB RAM, 160 GB of HDD space, virtual 1000 Mb/s NIC (this VM is the FOG server). My FOG server has to stay mobile, so I have no SSD, fibre, or 10 GbE to move data with.
I'm going to swap the SATA HDD in the host PC used as the FOG server for an SSD, which together with the 1 Gb NICs should help. Will test and report back.
-
OK, a question here regarding a single upload/download of an image. I have tested one image that is 35 GB of used space on a 60 GB HDD: uploading it takes 15 minutes and it compresses to 24 GB in the FOG /images directory, and downloading (deploying) the same image back to the workstation takes 10 minutes flat. Based on those upload/download times, if I had 8 identical workstations to deploy, wouldn't that mean 1 hour and 20 minutes to deploy them one at a time, back to back? What about all of them in a multicast? Would that take longer or shorter than 1 hour and 20 minutes?
-
It should be shorter, but I've never had any luck with multicast. If you set all the jobs as unicast, it should be about 15 to 20 minutes flat for all 10; they will all image at the same time even as unicasts.
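Back-of-envelope on that, assuming all of them pull the ~24 GB compressed image at once through a single gigabit port on the server and nothing else bottlenecks:
[code]
# 8 clients sharing one 1 Gbps server uplink, ~24 GB compressed each
awk 'BEGIN { total = 8 * 24 * 1000;    # MB that must cross the wire
             wire  = 125;              # MB/s at gigabit line rate
             printf "~%.0f minutes\n", total / wire / 60 }'
# prints: ~26 minutes, a world away from the 9-hour multicast
[/code]
If the server's disk or page cache can't keep all 8 streams fed it will run longer, but it degrades gracefully instead of stalling the whole batch.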
-
And unicast means kicking off a separate task for each deploy, so I have 8 different deploys running at once? Or do I task one deploy of one image, and once that's complete, start another?
-
There are many ways you can do this. Yes, you can task all the jobs individually. Or you can create a group of the hosts; when you task that group, it creates all of the tasks individually for you, but from one place. Or you can do the multicast thing.
Your system management queue setting determines how many systems image at the same time. Once that limit is hit, systems beyond that number sit in the queue until a slot opens up.
This means you can (in unicast) create tasks for 10 systems (10 is usually the default) even if they all have different image IDs. Turn all of those systems on at the same time and they'll all start imaging. Anything beyond 10 (let's say you have 15 systems) means the extra 5 wait for their turn to image.
The reason this is faster, I think, is that the decompression is being done by the clients (the ones receiving the image), whereas in multicast it's all done on the server before the data gets to the client.
In review: in a multicast deployment, the server is not only imaging all of the clients and hosting the database and the GUI, it's also performing the additional work of decompressing the image and sending it to the clients. In unicast, the clients do the majority of the legwork.
Let's put that in perspective. Multicast should theoretically be faster because the image only has to be decompressed once, but it also requires that all the clients receiving the image accept the data and write it to the drive at the same time. They have to continuously stay in sync with what the server is doing (or vice versa) so everything is on the same page. It may start out faster because the server has no trouble at the beginning, but as time drags on the systems drift out of sync, and the server has to wait until all of them are matched up again.
Theoretically, unicast should be slower because each client opens its own link to the server. It isn't slowing the server down, though; timewise it would be slightly slower per client because the clients do the work. But there's no requirement to keep everyone in the same sync frame, so each client is free from waiting on the other machines.
-
Sometimes you can't use multicast (cheap switches, mixed environment…). But you can make unicast faster, especially if the server's disks are faster than the clients' and it has shitloads of RAM. If you deploy a 16 GB image and you have 16 GB of RAM, chances are that if the tasks start simultaneously they will hit the cache on the server, and then they're limited only by their own disk speed, which usually runs slower than 1 Gbps. So you can get 2-3 hosts deployed at once and still be faster than 1:1.
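One way to push the odds in your favor (a sketch, not a FOG feature; the path follows FOG's default /images layout and "Win7Base" is a made-up image name): pre-read the image into the page cache right before starting the unicast tasks.
[code]
# Pull the compressed image into the Linux page cache so concurrent
# unicast deploys are served from RAM instead of seeking the disk
cat /images/Win7Base/* > /dev/null    # "Win7Base" is a placeholder name

# Confirm it worked: the "cached" figure should have grown by ~the image size
free -m
[/code]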
-
Thanks for the info, guys. That's one thing I didn't bother to try: doing a unicast of 8 separate deploy tasks. I deployed one image, which took 20 minutes, so I went full bore and tried 8 in a multicast and got stuck with 9 hours to deploy. So when I go back to the site I'm going to have an 8-port gigabit switch, plug the stations into it directly, and try the unicast method to see how the times change.