Upload times: does the server or client matter more?
-
I see upload times of an hour and up at max compression, but since I typically update these images only once or twice a year and deployment is much faster, I don’t really see this as an issue unless you are constantly updating images.
-
We constantly “update images” because we are using FOG to push out a bunch of updates through imaging. For many software updates in student computer labs (where user data is never retained), re-imaging is the easiest way to do it. In theory, we have other tools that will get the job done, but (a) large software updates can’t easily be multicast to all the PCs, so pushing them that way takes forever, and (b) when I use an image, I [B]know[/B] all software installs are 100% successful, and if the software requires custom configuration, that’s much easier to do in an image than manually figuring out all the registry keys that need to be set.
This would obviously never fly for employee machines, but then again, we don’t get asked to make so many specific and large changes to numerous employee machines at one time.
We are beginning to use FOG less as a traditional imaging server and more as a software and configuration distribution point. Ideally, we’d like it so that someone can request a large piece of software be installed on 50 computers, we install it on the image, and we push out the image, all within a span of about 45 minutes. Forty-five minutes, for us, would be an ideal turn-around time from request to finish. Right now, the upload speed is normally the missing piece of this puzzle.
-
I agree that your best bet to speed up your uploads is a beefier laptop.
If it’s that critical, go all the way: put in the best processor money can buy, and obviously a solid-state drive too.
-
Why not use a VM to create your master image and use the laptop to remote into it? Then you get the performance of your server with RAID for uploads instead of the laptop’s.
-
Or deploy your software and updates via snapins.
-
The snapins often don’t work well for this. If we were going to deploy the software separately, we already have better mechanisms (like KACE) in place to handle that.
But for the reasons I stated above (guaranteed 100% successful installs on every machine every time, faster overall deployment since even a full reimage of a lab of 40 machines usually takes only about 8 minutes, easier customization of software settings, less bandwidth required, etc.), we’d prefer to use FOG. FOG is actually really good for this use.
Junkhacker, I hadn’t even thought about that. That’s probably going to be our best bet, actually. Years ago, I tried uploading to FOG using a VM, and the speed was horrendous (about 3 hours to upload). I guess a beefy CPU and modern VM would do better these days, though?
-
[quote=“loosus456, post: 43963, member: 26317”]
Junkhacker, I hadn’t even thought about that. That’s probably going to be our best bet, actually. Years ago, I tried uploading to FOG using a VM, and the speed was horrendous (about 3 hours to upload). I guess a beefy CPU and modern VM would do better these days, though?[/quote]
I have a VM that runs updates on itself and uploads weekly as a scheduled task. It looks like it averages 35 minutes according to the reports.
-
[quote=“Junkhacker, post: 43981, member: 21583”]I have a VM that runs updates on itself and uploads weekly as a scheduled task. It looks like it averages 35 minutes according to the reports.[/quote]
Now that is really, really awesome! I never even thought of it!
[quote=“loosus456, post: 43963, member: 26317”] I guess a beefy CPU and modern VM would do better these days, though?[/quote]
FOG detects the number of cores when uploading, and uses them all… You could allocate 8 or 12 cores for that VM. IMAGINE the upload speeds! Win7 64-bit supports something like 256 cores across two physical sockets, so building the image wouldn’t be an issue.
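If you want a rough feel for how much the extra cores actually buy you, here’s a throwaway Python sketch that compresses the same synthetic data single-threaded and then chunked across a process pool. It’s plain zlib on junk data, not FOG’s real capture/compression pipeline, so only the scaling trend matters:

```python
# Toy illustration of why more cores shorten uploads: gzip-style
# compression parallelizes well once you split the stream into chunks.
# Plain zlib on synthetic data, NOT FOG's actual capture pipeline.
import os
import time
import zlib
from multiprocessing import Pool

CHUNK = 4 << 20  # 4 MB chunks


def compress_chunk(chunk: bytes) -> bytes:
    return zlib.compress(chunk, 6)


def main() -> None:
    # ~128 MB of half-compressible data as a stand-in for a partition image
    data = (os.urandom(1 << 20) + bytes(1 << 20)) * 64
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

    start = time.perf_counter()
    size_1 = sum(len(compress_chunk(c)) for c in chunks)
    t_1 = time.perf_counter() - start

    with Pool() as pool:  # one worker per core by default
        start = time.perf_counter()
        size_n = sum(len(c) for c in pool.map(compress_chunk, chunks))
        t_n = time.perf_counter() - start

    print(f"single core: {t_1:.1f} s, all cores: {t_n:.1f} s "
          f"(output {size_1 >> 20} MB vs {size_n >> 20} MB)")


if __name__ == "__main__":
    main()
```

On a quad-core machine the parallel pass usually lands somewhere around 3-4x faster, which is exactly why throwing more cores at the capture VM helps.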
-
Just came across this thread… When FOG uploads an image, if you are doing an upload of the FULL hard drive, all used and unused space must be read by the FOG client. When deploying an image, unused space is recognized in the image file, which is much smaller than the size of the full hard drive. The compressed image is written to the FOG server very quickly.
When uploading a full image at a high compression setting, the read speed of the originating hard drive will be the critical choke point. Get the hard drive with the fastest sequential read.
You could try the Toshiba MK3001GRRB. It is a 15k RPM SAS drive with sequential transfer speeds of 198.82MB/s read and 196.77MB/s write. If this is installed on a desktop, you will need a SAS controller.
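A quick way to sanity-check which side is really the choke point before buying hardware is a back-of-the-envelope model. All of the figures below are made-up examples, so substitute your own disk size, read speed, per-core compression throughput, and network speed; because read, compress, and send run as a pipeline, whichever stage is slowest dominates the total:

```python
# Rough capture-time estimate: the upload takes roughly as long as the
# slowest stage - reading the source disk, compressing the stream, or
# sending the compressed data to the FOG server. All numbers are
# illustrative guesses; plug in your own measurements.
disk_read_mb_s = 198          # sequential read of the source drive
compress_mb_s_per_core = 60   # rough gzip-class throughput per core
cores = 4
network_mb_s = 110            # ~1 GbE after overhead
compression_ratio = 0.45      # compressed size / raw size

full_disk_gb = 500            # non-resizable capture reads the whole disk
used_space_gb = 60            # resizable capture reads only used space

for label, gb in (("full disk", full_disk_gb), ("used space only", used_space_gb)):
    raw_mb = gb * 1024
    read_min = raw_mb / disk_read_mb_s / 60
    compress_min = raw_mb / (compress_mb_s_per_core * cores) / 60
    net_min = raw_mb * compression_ratio / network_mb_s / 60
    total = max(read_min, compress_min, net_min)
    print(f"{label:15s}: ~{total:.0f} min "
          f"(read {read_min:.0f}, compress {compress_min:.0f}, net {net_min:.0f})")
```

With guesses like these, the full-disk capture is read-bound at 40+ minutes, while a used-space-only capture of the same machine finishes in a handful of minutes no matter which stage wins, which is why the resizable-image and VM approaches above look so attractive.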
-
Thinking more on this topic, a VM would be faster because the VM software already knows where the empty space is, so those physical reads are skipped entirely, and the reads for used space touch less physical disk because the virtual disk is stored compressed. For example, reading a full 500 GB disk at ~200 MB/s takes over 40 minutes, while reading only 60 GB of used space takes about five.
-
@JasonW said:
[quote]all used and unused space must be read by the FOG client.[/quote]
Depends on the image type. Resizeable images resize the partitions first, so that only used space is captured.
[quote]When deploying an image, unused space is recognized in the image file, which is much smaller than the size of the full hard drive. The compressed image is written to the FOG server very quickly.[/quote]
Again, it depends on the image type. But generally image deployment is a lot faster than image capture.
Here’s a great article on the subject: https://wiki.fogproject.org/wiki/index.php/Image_Compression_Tests
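And if you just want a feel for why maximum compression drags capture out so much for a fairly small size win, here’s a toy comparison. It’s plain zlib on synthetic data rather than FOG’s real compressor, so ignore the absolute times and ratios and only look at the trend:

```python
# Compare compression levels on the same blob of semi-compressible data.
# Higher levels generally cost a lot more CPU time for only a modest
# reduction in size - plain zlib here, not FOG's actual compressor.
import os
import time
import zlib

# ~64 MB: alternating random and zeroed megabytes, a crude stand-in
# for a partition image stream
payload = (os.urandom(1 << 20) + bytes(1 << 20)) * 32

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(payload)
    print(f"level {level}: {elapsed:5.2f} s, ratio {ratio:.2%}")
```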