Upload times: does the server or client matter more?
-
We are trying to improve upload speeds without reducing compression.
We currently use a laptop (Dell Latitude E5520) with an i5-2540M CPU to produce all of our images, and the upload speed is [I]horrendous[/I]. It takes up to 85 minutes to upload a 38-gigabyte image, whereas downloading/multicasting to an entire room of 30 computers usually takes about 10 minutes.
First, is the laptop or the FOG server the limiting factor when uploading? Which one, the server or client, does the actual compression?
Second, whether it’s the laptop or FOG server, what type of CPU would help most? For example, can FOG take advantage of multicore CPUs, or is single-thread performance more important than having more cores?
-
The client performs the compression. The reason for a long upload is twofold: more compression is better for downloads but worse for uploads. That said, 1.x.x lets you change the compression factor right from the GUI. 0.32 had a hard-coded compression level of 3; 1.x.x defaults to the maximum of 9. Try lowering the number and your upload speed should increase. Find the point where good compression and download speeds are still maintained to help you out. That said, how often are you uploading an image from the same system versus downloading? The compression already uses multiple cores.
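If you want a rough feel for that trade-off before touching the GUI setting, you can time pigz (the parallel gzip compressor FOG uses, mentioned below) by hand on any large file on a Linux box. This is only an illustrative sketch; the file path is a placeholder, and the actual numbers will depend entirely on your CPU and data:
[code]
# Time the same file at a low and a high compression level.
# /tmp/sample.img is a placeholder; use any large file you have handy.
time pigz -3 -k -c /tmp/sample.img > /tmp/sample-3.img.gz
time pigz -9 -k -c /tmp/sample.img > /tmp/sample-9.img.gz

# Compare the resulting sizes to see what level 9 actually buys you.
ls -lh /tmp/sample-3.img.gz /tmp/sample-9.img.gz
[/code]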
-
We really don’t want to reduce the compression. As I said, we [B]really[/B] like the current download speeds. If there were a way to crank it up more, we would.
We use a laptop to create [B]all[/B] images, no matter where the image will ultimately go. We have actually gotten to the point where we use FOG to update labs even when someone asks for a tiny update because it’s getting faster and easier to just do that. In our deployments, we don’t have to touch the computers at all after they have been imaged, so when someone asks for a new piece of software, we’ll gladly just push out a new image.
The only real obstacle now is the upload time.
We’d prefer to continue using a laptop because we sometimes need instructors/librarians/employees to make changes to images without our direct supervision, and it would be a pain to lug a computer to their office for that purpose and set up a keyboard, mouse, and monitor.
We are in the market for a new laptop for image creation. I just would like to know what type of CPU I should be looking at. Is single-threaded/single-core performance more desirable, or can FOG take advantage of multiple slower cores?
-
FOG uses pigz ([url]http://zlib.net/pigz/[/url]) with the command-line parameter ‘-p’ when compressing the uploaded image. Multicore should be “supported”…
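For reference, the invocation looks roughly like this. This is just a sketch of how the ‘-p’ flag is used, not FOG’s exact command pipeline, and the paths are placeholders:
[code]
# -p sets the number of compression threads; -9 is the compression level.
# "$(nproc)" uses every core the client (or VM) can see.
pigz -p "$(nproc)" -9 -c /tmp/partition.img > /images/dev/partition.img.gz
[/code]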
-
Okay, we are now looking into a new laptop. We really want to have our cake and eat it, too.
Does anyone have a laptop that they use to upload images to FOG? If so, what are your CPU specs, and is the upload time decent with compression at its highest setting?
On a side note:
When we combine how fast FOG is these days with all the automation features that we’ve added (automatic naming of the machines, adding printers based on physical location of the computer, etc.), FOG is making our VDI implementation go away. The benefits that VDI brought are slowly being eroded by FOG.
Another big feature that is making VDI less necessary: Dell has implemented PXE-on-Wake in their recent (2013 and later) BIOSes. This means that if we need to image a computer lab with 40 computers, [B]we don’t even need to get each computer to boot to FOG[/B]. Simply waking them up from FOG makes each computer automagically boot to FOG, pull down the assigned image, restart, go through OOBE, and run any startup scripts.
We are now really streamlining the way we manage machines. Between Dell’s tools and FOG, things really are turning the corner for the better. I don’t think we’ve been this optimistic in ten years about the management of our machines.
-
I see upload times of an hour and up at max compression, but since I typically update these images only once or twice a year and deployment is much faster, I don’t really see this as an issue unless you are constantly updating images.
-
We constantly “update images” because we are using FOG to push out a bunch of updates through imaging. For many software updates in student computer labs (where user data is never retained), re-imaging is the easiest way to do it. In theory, we have other tools that will get the job done, but (a) large software updates cannot easily be multicast to all the PCs, so they take forever, and (b) when I use an image, I [B]know[/B] all software installs are 100% successful, and if the software requires custom configuration, that’s much easier to do in an image than by manually figuring out all the registry keys that need to be set.
This would obviously never fly for employee machines, but then again, we don’t get asked to make so many specific and large changes to numerous employee machines at one time.
We are beginning to use FOG less as a traditional imaging server and more as a software and configuration distribution point. Ideally, we’d like to get to the point where someone can request that a large piece of software be installed on 50 computers, we install it on the image, and we push out the image, all within a span of about 45 minutes. Forty-five minutes, for us, would be an ideal turnaround time from request to finish. Right now, the upload speed is normally the missing piece of this puzzle.
-
I agree that your best bet to speed up your uploads is a beefier laptop.
If it’s that critical, go all the way: put in the best processor money can buy and, obviously, a solid-state drive too.
-
Why not use a VM to create your master image and use the laptop to remote into it? Then you get the performance of your server, with its RAID, for uploads instead of the laptop’s.
-
Or deploy your software and updates via snapins.
-
The snapins often don’t work well for this. If we were going to deploy the software separately, we already have better mechanisms (like KACE) in place to handle that.
But for the reasons I stated above (e.g., guaranteed 100% installations for all machines every time, faster deployment even when considering the entire reimage because imaging usually only takes about 8 minutes for a lab of 40 machines, easier to allow people to customize the software settings, less bandwidth required, etc.), we’d prefer to use FOG. FOG is actually really good for this use.
Junkhacker, I hadn’t even thought about that. That’s probably going to be our best bet, actually. Years ago, I tried uploading to FOG using a VM, and the speed was horrendous (about 3 hours to upload). I guess a beefy CPU and modern VM would do better these days, though?
-
[quote=“loosus456, post: 43963, member: 26317”]
Junkhacker, I hadn’t even thought about that. That’s probably going to be our best bet, actually. Years ago, I tried uploading to FOG using a VM, and the speed was horrendous (about 3 hours to upload). I guess a beefy CPU and modern VM would do better these days, though?[/quote]I have a VM that runs updates on itself and uploads weekly as a scheduled task. It looks like it averages 35 minutes according to the reports.
-
[quote=“Junkhacker, post: 43981, member: 21583”]I have a VM that runs updates on itself and uploads weekly as a scheduled task. It looks like it averages 35 minutes according to the reports.[/quote]
Now that is really, really awesome! I never even thought of it!
[quote=“loosus456, post: 43963, member: 26317”] I guess a beefy CPU and modern VM would do better these days, though?[/quote]
FOG detects the number of cores when uploading and uses them all… You could allocate 8 or 12 cores to that VM. IMAGINE the upload speeds! Win7 64-bit supports something like 256 logical processors across two physical sockets, so building the image inside the VM wouldn’t be an issue.
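If you want to sanity-check how much those extra cores actually buy you before sizing the VM, one quick, informal way is to time the same compression job with different thread counts inside the VM (a sketch only; the file path is a placeholder):
[code]
# Time the same file with 2, 4, and 8 threads to see how compression
# throughput scales with the number of vCPUs allocated to the VM.
for threads in 2 4 8; do
    echo "pigz -p $threads:"
    time pigz -p "$threads" -9 -k -c /tmp/sample.img > /dev/null
done
[/code]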
-
Just came across this thread… When FOG uploads an image, if you are uploading the FULL hard drive, all used and unused space must be read by the FOG client. When deploying an image, unused space is recognized in the image file, so the image is much smaller than the full hard drive. The compressed image is written to the FOG server very quickly.
When uploading a full image with a high compression percentage, the read speed of the originating hard drive will be the critical choke point. Get the hard drive with the fastest sequential read.
You could try the Toshiba MK3001GRRB. It is a 15k RPM SAS drive with sequential transfer speeds of 198.82MB/s read and 196.77MB/s write. If this is installed on a desktop, you will need a SAS controller.
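If you want to confirm whether the source disk really is the choke point on a given machine, you can measure its raw sequential read speed and compare that against the upload throughput you are seeing. A rough sketch, assuming the source drive is /dev/sda (substitute your own device; both commands need root and only read from the disk):
[code]
# Quick sequential read benchmark of the source drive.
sudo hdparm -t /dev/sda

# Alternative with dd: read 4 GB straight off the device, bypassing the page cache.
sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct status=progress
[/code]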
-
Thinking more on this topic, a VM would be faster because the VM software already knows where the empty space is (thus skipping those physical reads), and the physical reads from disk would be fewer because the used space is compressed.
-
[quote=“JasonW”]all used and unused space must be read by the FOG client.[/quote]
Depends on the image type. Resizable images resize the partitions first, so that only used space is captured.
[quote=“JasonW”]When deploying an image, unused space is recognized in the image file, so the image is much smaller than the full hard drive. The compressed image is written to the FOG server very quickly.[/quote]
Again, it depends on the image type. But generally, image deployment is a lot faster than image capture.
Here’s a great article on the subject: https://wiki.fogproject.org/wiki/index.php/Image_Compression_Tests