How to make deployment quicker with Windows 7 and FOG

  • Scenario: HP DC6000, 160 GB HDD with 2 partitions (System - 2 GB; OS - 100 GB, with about 24 GB used for the OS, Office 2010, printers, and Microsoft Updates), leaving roughly 45 GB of blank space. I used FOG to capture the image with [Multiple Partition Image - Single Disk (Non-Resizeable)] since my hosts all have the same or larger HDDs. Deploying to another system takes about 45 minutes, and yes, I have gigabit from the FOG server to the switch to the host. Is there any way I can make this quicker? Imagine doing this for 10 workstations.

  • OK, here's my update on why I had terrible speeds deploying a multicast to 8 workstations. My mobile FOG server was plugged into an 8-port 1 Gb switch, which was uplinked through a VOIP phone (100 Mb), which in turn was the uplink to the rest of the network. My deployments were severely bottlenecked because everything was squeezing through a 100 Mb pipe. By temporarily bypassing the VOIP uplink and plugging the 8-port switch into the real network, I'm happy to say I successfully deployed 8 Windows 7 workstations through multicast at 590 MB/min, which took 52 minutes tops! Then I had to image one 6200, and that took 6 minutes at 3.39 GB/min.
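A quick back-of-the-envelope sketch of what those two uplinks can actually carry. The link speeds come from the post above; protocol overhead is ignored, so these are ceilings, not measurements:

```python
# Theoretical throughput ceilings for the two uplinks described
# above, ignoring protocol overhead. Rough numbers, not measurements.

def max_mb_per_min(link_mbps: float) -> float:
    """Best-case throughput of a link, in MB per minute."""
    return link_mbps / 8 * 60  # Mbit/s -> MB/s -> MB/min

phone_uplink = max_mb_per_min(100)   # the 100 Mb VOIP phone
gigabit_link = max_mb_per_min(1000)  # the 8-port 1 Gb switch

print(f"100 Mb uplink: at most ~{phone_uplink:.0f} MB/min")
print(f"1 Gb link:     at most ~{gigabit_link:.0f} MB/min")
# The observed 590 MB/min multicast fits comfortably under gigabit,
# but could never have squeezed through the 100 Mb phone uplink.
```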

  • Developer

    [quote=“chad-bisd, post: 19665, member: 18”]Faster server disk subsystem usually equals faster unicast deployment to multiple clients. I use an old Dell server that has 5 disks in RAID5 array. I can image 15 clients at about 3GB per minute per client, or I can image over 30 tablets that only have 100Mbps interfaces, and get about 900 MiB per minute per client.

    When I was using an old desktop with a single 7200 RPM SATA drive in it, imaging more than 2 clients at a time slowed everything down due to the non-sequential disk read requests. RAID5 makes it better, as does a RAID controller with built-in cache.[/quote]


    I have a FOG machine built out of some old hardware with a 7200 RPM disk in it. It images at a decent speed; I wouldn't say it is slow, and it is QUITE the improvement over our WDS setup.

    I got hold of an IBM eServer machine with RAID5 10K RPM disks. I installed my Ubuntu of choice and FOG 0.33b, and this sucker FLIES compared to my 0.32 server. I've also tried installing FOG 0.32 on this server, and it seems it is the hardware that made the major improvement.

    I had some trouble initially setting up the IBM eServer because of the RAID controller it uses, but once I removed the PCI card, things went swimmingly. Not to stray too far from the subject, but I am still working on getting the PCI card to work in case it has some kind of caching ability. This used to be a Novell NetWare server, so I am not sure what I need to do to get it working right, but I'm not giving up yet!

  • Moderator

    Faster server disk subsystem usually equals faster unicast deployment to multiple clients. I use an old Dell server that has 5 disks in RAID5 array. I can image 15 clients at about 3GB per minute per client, or I can image over 30 tablets that only have 100Mbps interfaces, and get about 900 MiB per minute per client.

    When I was using an old desktop with a single 7200 RPM SATA drive in it, imaging more than 2 clients at a time slowed everything down due to the non-sequential disk read requests. RAID5 makes it better, as does a RAID controller with built-in cache.

  • Thanks for the info, guys. One thing I didn't try was creating 8 separate unicast deploy tasks. I deployed one image, which took 20 minutes, so I went full bore and tried 8 in a multicast and got stuck with 9 hours to deploy. When I go back on site I'm going to bring an 8-port 1 Gb switch, plug the stations into it directly, and try the unicast method to see how the times change.

  • Sometimes you can't use multicast (cheap switches, mixed environment…). But you can make unicast faster, especially if the server's disks are faster than the clients' and it has loads of RAM. If you deploy a 16 GB image and you have 16 GB of RAM, chances are that if the tasks start simultaneously, they will hit the cache on the server, so they will be limited only by their own disk speed, which is usually slower than 1 Gbps. So you can deploy 2-3 hosts at once and still be faster than 1:1 😉
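A toy model of that cache effect. All the speeds here are assumptions for illustration (80 MB/s server disk, 100 MB/s client disks, ideal gigabit wire), not figures from this thread:

```python
# Toy model of the page-cache effect described above. The speeds are
# assumed for illustration, not measured in this thread.

IMAGE_GB    = 16    # image size; fits entirely in the server's RAM
SERVER_DISK = 80    # MB/s, cold read from the server's disk
CLIENT_DISK = 100   # MB/s, each client's local write speed
WIRE        = 125   # MB/s, ~1 Gbps
N           = 3     # clients deployed simultaneously

def minutes(gb: float, mb_per_s: float) -> float:
    return gb * 1000 / mb_per_s / 60

# Back to back: every client pays the server's cold disk-read cost.
serial = N * minutes(IMAGE_GB, min(SERVER_DISK, CLIENT_DISK, WIRE))

# Simultaneous: the first read warms the cache, so all N stream from
# RAM, each limited by its own disk or its share of the wire.
parallel = minutes(IMAGE_GB, min(CLIENT_DISK, WIRE / N))

print(f"{N} clients back to back: ~{serial:.1f} min")
print(f"{N} clients at once:      ~{parallel:.1f} min")
```

Even with the wire split three ways, the simultaneous run comes out ahead because the server's disk is only read cold once.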

  • There are many ways you can do this. Yes, you can task all jobs individually. Or you can create a group of hosts, and deploying to that group will create all the tasks individually, but from one location, for you. Or you can do the multicast thing.

    Your system management queue tells you how many systems can image at the same time. Once that limit is hit, systems beyond that number sit in the queue until a slot opens up.

    This means you can (unicast) create a task for 10 systems (10 is usually the default) even if they all have different image IDs. Turn all of those systems on at the same time and they'll all start imaging. Anything more than 10 (let's say you have 15 systems), the extra 5 will wait for their turn to image.

    The reason, I think, this is faster is that the decompression is being done by the clients (the ones receiving the image), whereas in multicast it's all done on the server before the data gets to the clients.

    In summary: in a multicast deployment, the server is not only imaging all of the clients and hosting the database and the GUI, it's also performing the additional task of decompressing the image and sending it to the clients. In unicast, the clients do the majority of the legwork.

    Let's put that in perspective. Multicast should theoretically be faster because the image only has to be decompressed once, but it also requires that all clients receiving the image take the data and place it on the drive at the same time. They have to continuously stay in sync with what the server is doing (or vice versa) so everything is on the same page. It may start out faster because the server has no trouble getting going, but as time drags on the systems may fall out of sync, and the server has to wait until all systems are matched up.

    Theoretically, unicast should be slower because each client creates its own link to the server. It doesn't slow the server down much, but time-wise it would be slightly slower because it's up to the clients to do the work. However, there is no requirement to keep everyone in the same sync frame, so each client is free from waiting on the other machines.

  • And unicast means kicking off one deploy task after another, so I have 8 different deploys? Or do I task one deploy, and once that's complete, perform another?

  • It should be shorter but I’ve never had any luck with multicast. If you set all jobs as unicast it should be about 15 to 20 minutes flat for all 10. They will all image at the same time even on unicast.

  • OK, a question here regarding one upload/download of an image. I have tested one image that is 35 GB of used space on a 60 GB HDD. Uploading this one image takes 15 minutes and it compresses to 24 GB in the FOG /images directory; downloading (deploying) the same image back to the workstation takes 10 minutes flat. Based on those times, if I had 8 identical workstations to deploy, wouldn't that mean it would take 1 hour and 20 minutes to deploy one at a time, back to back? And what about all of them in a multicast: would that take longer or shorter than 1 hour and 20 minutes?
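The back-to-back arithmetic in that question, spelled out (10 minutes per deploy and 8 hosts, strictly sequential, as stated above):

```python
# Sequential-unicast arithmetic from the question above:
# 8 identical workstations at 10 minutes per deploy, one at a time.

deploy_min = 10
hosts = 8

total = deploy_min * hosts
print(f"{hosts} hosts back to back: {total} min "
      f"= {total // 60} h {total % 60} min")
```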

  • Well, 35 GB deployed in 5 minutes for one workstation? That sounds insane. I just tested my upload and download for one workstation using a 1 Gb NIC on the host, with my FOG server also on 1 Gb through an Enterasys switch: uploading the image took 20 minutes to copy 35 GB of space at 1.85 GB/min, and downloading the same file as a deploy takes 11 minutes at 3.0 GiB/min. Again, this is not where I originally did the multicast of 8 hosts, so I will be taking a small 8-port 1 Gb switch next time to make sure I'm at least getting 1 Gb speeds.

    FOG server:
    HP DC8300, Intel Core i5 @ 2.90 GHz, 16 GB RAM, Intel 82579M gigabit NIC, WD 320 GB SATA III (host)
    Ubuntu Desktop 12.04 VM, 8 GB RAM, 160 GB HDD space, virtual 1000 Mb/s NIC > FOG server

    My FOG server is considered mobile, so I have no SSD, Fibre Channel, or 10 GbE as data transfer methods.

    I'm going to switch out the SATA HDD on the host PC used as the FOG server for an SSD, which together with the 1 Gb NICs should help. Will test and report back.

  • What kind of disk setup do you have on your server? How much RAM/CPU? 🙂
    If you were going at wire speed, 35 GB should be deployed in 5 minutes…
    Assuming 1 Gbps wire speed, you have to account for :

    • NFS overhead (let’s say 5%)

    • Fog host disk (used to write): usually, standard SATA disks nowadays can get 100 MB/s… Older ones… get that down to 50.

    • FOG server disks… it needs to read the data, but if you have standard disks, it should be 100 MB/s at least (you can play with bonnie++ to get an idea, or hdparm -tT /dev/sda, to see what you get)

    • To test the server disk, you can use bonnie++ or hdparm (smartctl from smartmontools will check disk health)

    • To test the bandwidth, you can use iperf, see what you get on the link

    • To test if your server is up and connected at 1 Gbps, use mii-tool or ethtool

    But even counting 50 MB/s disks on both sides, plus the overhead, you get around 400-500 Mbps, which is still roughly 10-12 minutes. Then there is the switch capacity… but for 1:1, with 1 Gbps on each side, I doubt any gigabit switch will fail you.
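Those two estimates, worked through in Python (35 GB image; 125 MB/s is ideal gigabit, 50 MB/s is the slow-disk case from above):

```python
# The estimates above, worked through: time to move a 35 GB image at
# ideal gigabit wire speed vs. with 50 MB/s disks as the bottleneck.

IMAGE_GB = 35

def transfer_min(gb: float, mb_per_s: float) -> float:
    return gb * 1000 / mb_per_s / 60

wire_limited = transfer_min(IMAGE_GB, 125)  # 1 Gbps ~= 125 MB/s
disk_limited = transfer_min(IMAGE_GB, 50)   # slow SATA on either end

print(f"wire speed:    ~{wire_limited:.1f} min")
print(f"50 MB/s disks: ~{disk_limited:.1f} min")
```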

    Good luck 😉

  • OK, I shrunk the whole partition down to 60 GB total, with 35 GB used by the OS and software installs, and I also cleaned out all the areas you mentioned, TR, which was actually only KBs and MBs of space to begin with, so no real change in the full size of the Windows 7 OS plus Office 2010, Adobe Reader, Flash, and Java. My speeds still look like 1.82 GiB/min, which averages out to 20 minutes to upload the 60 GB partition with 35 GB of space actually being copied up to the FOG server's /images directory, where it compresses down to 25 GB. So it doesn't matter whether my initial full HDD size is 100 GB or 60 GB, because what never changes is the 35 GB of OS data that needs to get uploaded.

    So maybe I wasn't running full 1 Gb speeds the day I was testing at the remote site. Next time I will plug my mobile FOG server directly into a port on the switch instead of using a network cable plugged into a port out on the floor, and maybe I will just buy a small 10-port 1 Gb switch and plug the workstations into it along with my mobile FOG server, because at that point I know for sure all my ports from workstation to FOG server to switch will be 1 Gb. Unfortunately I don't have any info from when I was downloading the image for one PC at the remote site; all I have is the multicast speed, which was 65 MiB/min for 8 hosts with 35 GB to deploy, and that took 9 hours. Maybe someone can figure out from this info whether I was running 100 Mb or 1 Gb, because if I'm currently getting 1.82 GiB/min and that takes 20 minutes to upload 35 GB, then 8 workstations one at a time would take 2 hours and 40 minutes, not 9 hours!!!
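Working backwards from the one number we do have, 65 MiB/min, gives an effective link rate (a rough conversion that ignores multicast overhead):

```python
# Working backwards from the multicast figure above: what effective
# link speed does 65 MiB/min correspond to? Rough conversion only.

mib_per_min = 65
bits_per_s = mib_per_min * 1024 * 1024 * 8 / 60
mbps = bits_per_s / 1e6

print(f"65 MiB/min is an effective rate of ~{mbps:.1f} Mb/s")
# That is far below even Fast Ethernet's 100 Mb/s, so a plain 100 Mb
# bottleneck alone doesn't explain 9 hours; something like a duplex
# mismatch or heavy multicast retransmission was likely in play too.
```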

  • While Jaymes is correct that you shouldn't need FOGPrep, in the post provided Dark000 used FOGPrep and saw different results, though I'm not sure whether the difference came from FOGPrep itself or from FOGPrep combined with the cleanup of his image.

  • Yeah, sysprep has been burned into my head as the very last thing you do to seal a box for image deployment since the days of Windows 2000. And no KMS here, just VL on Windows 7 Pro/Enterprise. I will give my speed transfer updates soon.

  • Developer

    FOGPrep is no longer required. Some still use it, but it's not a necessity. Sysprep is a MUST with Windows 7 if you plan to activate against a KMS server. KMS requires each machine to have unique identifiers, and sysprep cleans the system so each machine can generate them.

    I can not verify if the same is true for other activation methods.

  • Yeah, it definitely sounds pretty close to my issue. FYI, I'm going to upload/download the huge HDD image again as a for-sure test with speeds, then repartition it from 100 GB to 60 GB total and see where I sit with transfer times. Also, sysprep vs. FOGPrep: should I run both, or one over the other?

  • [url][/url]

    Take a look through this post, it has some things that may help you out.

  • Well, this is what I've been thinking about this past hour: after all programs are installed, that takes about 35 GB out of the 100 GB, leaving roughly 60 GB free. So I think I'm going to recreate the base image with 60 GB total, of which about 35 GB will be used, leaving roughly 50% free for Microsoft updates or whatever else may be needed. This should shrink my deploy from a 100 GB drive to 60 GB, which is 40 GB taken off.