Deploy way faster than Capture
-
This is more of a curiosity. The scenario is this:
Server is installed on Ubuntu 16.04, on a laptop with SSD.
I capture from a lab computer, connected to the same Gigabit switch as the server. Capture speed: around 2 GB/min.
I redeploy the image I just captured to the same computer. Deploy speed: around 10 GB/min.
Why is there such a big speed difference between the two operations?
-
Two things come to mind.
- Write speed of the FOG server's SSD. Some SSDs have great read speeds but only a fraction of that for writes, sometimes as little as 1/5. For example, I tested one SSD whose read speed was about 500 MB/s but whose write speed was only around 100 MB/s. (A quick benchmark sketch follows after this list.)
- It takes more effort to compress an image than to expand it. During a capture or deployment the target computer does all of the heavy CPU work. During a capture it reads from the local disk, compresses the data, and transmits it to the FOG server; the FOG server only moves that data from the network port to its local storage device. During a deployment the FOG server moves the data from its local storage device to the network, and the client takes over from there, decompressing the image and writing it to the local hard drive.
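If you want to check the first point on your own server, here is a minimal Python sketch (not a FOG tool; the test path and sizes are assumptions you should adjust) that times a sequential write and then a sequential read on the image store:

```python
import os
import time

# Rough sequential throughput check for the FOG server's storage.
# TEST_FILE path and sizes are assumptions -- adjust for your setup.
TEST_FILE = "/images/throughput_test.bin"
CHUNK = 64 * 1024 * 1024          # 64 MiB per write
CHUNKS = 16                       # ~1 GiB total

def time_write():
    data = os.urandom(CHUNK)      # incompressible data
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(CHUNKS):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())      # make sure it actually hit the disk
    return (CHUNK * CHUNKS) / (time.time() - start)

def time_read():
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    return (CHUNK * CHUNKS) / (time.time() - start)

write_bps = time_write()
read_bps = time_read()            # may be served from page cache, so treat as an upper bound
os.remove(TEST_FILE)
print(f"write: {write_bps / 1e6:.0f} MB/s, read: {read_bps / 1e6:.0f} MB/s")
```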
This speed disparity is more evident with zstd compression than with the traditional gzip compression, because zstd is slower to compress an image but faster to decompress it. Keep in mind that in a normal FOG configuration you typically capture an image once and deploy it many times, so for me a fast deployment is much more beneficial than a fast capture.
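To see the compress/decompress asymmetry for yourself, here is a small illustrative sketch using Python's bundled zlib (gzip-style compression; zstd numbers will differ but show the same pattern). The sample data is made up purely for the demo:

```python
import time
import zlib

# Build ~20 MB of semi-compressible sample data (a stand-in for disk blocks).
block = (b"some filesystem data " * 1000 + bytes(range(256))) * 50
data = block * 20

t0 = time.time()
compressed = zlib.compress(data, 6)   # level 6, roughly what gzip uses by default
t1 = time.time()
restored = zlib.decompress(compressed)
t2 = time.time()

assert restored == data
print(f"original:   {len(data) / 1e6:.1f} MB")
print(f"compressed: {len(compressed) / 1e6:.1f} MB")
print(f"compress:   {t1 - t0:.3f} s")
print(f"decompress: {t2 - t1:.3f} s")
```

On most machines the compress step takes several times longer than the decompress step, which mirrors the capture/deploy gap you are seeing.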
In your case, with a 10 GB/min transfer rate you can deploy a 25 GB reference image in less than 3 minutes. Try hitting that speed with other deployment tools; it is hard to match, and hard to complain about.
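For what it's worth, the arithmetic behind that estimate, using the rates quoted in this thread:

```python
# Back-of-the-envelope capture/deploy times from the rates quoted above.
image_size_gb = 25
capture_rate_gb_per_min = 2    # observed capture rate
deploy_rate_gb_per_min = 10    # observed deploy rate

print(f"capture: {image_size_gb / capture_rate_gb_per_min:.1f} min")  # 12.5 min
print(f"deploy:  {image_size_gb / deploy_rate_gb_per_min:.1f} min")   # 2.5 min
```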
-
@andreiv said in Deploy way faster than Capture:
Why is there such a big speed difference between the two operations?
Compression is CPU intensive, so capturing takes longer for the host you're capturing from.
Decompression is not CPU intensive, so for deployments the bottleneck is normally a hard disk somewhere, usually the host's own hard disk if you're only deploying to a single host. If you're deploying to hundreds, the network becomes the bottleneck.
Also consider how great a thing compression is. It allows you to capture a host that has a 500 GB HDD with 250 GB of that 500 used, and the image stored on the FOG server might only take up 100 GB of space thanks to compression.
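One rough way to reason about where the bottleneck sits is to treat a deployment as a pipeline and take the slowest stage. The sketch below is a toy model with made-up illustrative numbers, not measurements from any particular setup:

```python
# Toy pipeline model: the slowest stage sets the overall deploy rate.
# All throughput figures below are illustrative assumptions, not measurements.
stages_mb_per_s = {
    "server disk read": 500,
    "gigabit network": 118,        # ~1 Gb/s of payload
    "client decompression": 400,   # decompression is cheap, so this stays high
    "client disk write": 150,
}

bottleneck = min(stages_mb_per_s, key=stages_mb_per_s.get)
rate = stages_mb_per_s[bottleneck]
print(f"bottleneck: {bottleneck} at ~{rate} MB/s (~{rate * 60 / 1000:.0f} GB/min)")

# Note: the network carries compressed data while the disks see uncompressed
# data, so in uncompressed terms the network stage is effectively faster than
# its raw line rate, which is how a deploy can report more than ~7 GB/min.

# With several simultaneous unicast deployments each client gets a slice of the
# server's network and disk, so those stages become the bottleneck instead.
clients = 10
print(f"per-client network share with {clients} unicast deploys: "
      f"~{stages_mb_per_s['gigabit network'] / clients:.0f} MB/s")
```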
-
Thank you for your answers.