PartImage faster than PartClone?
-
@scgsg said in PartImage faster than PartClone?:
If I configure it as PartImage, the capture is done in PartClone (and it will change the setting to PartClone after capture), but deployment is quicker than an image captured with Image Manager set to PartClone.
Most people on the internet are aware of the scientific method. How were you testing? How many times? What were the constants? The variable here is what the image manager is set to - all other things should be constant, and all other things would include every bullet that @Tom-Elliott posted below. And this would be repeatable. I would be able to conduct your test at home here and find the same results.
-
@Wayne-Workman let me save you the trouble of testing:
case $imgFormat in
    6) # ZSTD split files, compressed
        zstdmt --ultra $PIGZ_COMP < $fifo | split -a 3 -d -b 200m - ${file}. &
        ;;
    5) # ZSTD compressed
        zstdmt --ultra $PIGZ_COMP < $fifo > ${file}.000 &
        ;;
    4) # Split files, uncompressed
        cat $fifo | split -a 3 -d -b 200m - ${file}. &
        ;;
    3) # Uncompressed
        cat $fifo > ${file}.000 &
        ;;
    2) # GZip/piGZ split files, compressed
        pigz $PIGZ_COMP < $fifo | split -a 3 -d -b 200m - ${file}. &
        ;;
    *) # GZip/piGZ compressed
        pigz $PIGZ_COMP < $fifo > ${file}.000 &
        ;;
esac
This is the code that the image format setting drives on captures. The default PartClone image format is "1"; PartImage is "0".
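To illustrate the point, here is a stripped-down sketch of that case statement (the `classify` helper is mine, not FOG's): formats "0" (partimage) and "1" (partclone gzip) match none of the explicit branches, so both fall through to the default `*)` pigz branch.

```shell
# Hypothetical demo, not FOG's actual script: map each image format
# number to the branch it would take in the capture case statement.
classify() {
    case $1 in
        6) echo "zstd, split" ;;
        5) echo "zstd" ;;
        4) echo "uncompressed, split" ;;
        3) echo "uncompressed" ;;
        2) echo "pigz, split" ;;
        *) echo "pigz (default)" ;;     # 0 and 1 both land here
    esac
}
partimage=$(classify 0)
partclone=$(classify 1)
echo "format 0 -> $partimage"
echo "format 1 -> $partclone"
```

Both lines print "pigz (default)", which is why the two settings capture identically.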
As you can see, literally the same thing is done on capture with either of those two settings, as far as how the image is captured goes.
-
@Tom-Elliott I wanted to say that I hope I haven't in any way (either by implication or statement) come across as negative or complaining. If I have, that's my bad, as I think this project is great and I am very thankful for all your hard work. My intention was actually to see whether this was expected behavior and/or to find the most efficient deployment strategy.
Thank you for pointing out the main things that can/will have an impact on speed.
- The gig switch only has the two test PCs, the vSphere host (that the FOG server sits on; there are no other VMs on this host) and a connection to the rest of the network. The gig switch has its default config; is there any particular configuration I need to pay attention to, given that I haven't used multicast deployment in any of my tests?
- I will keep that in mind, particularly with our older machines, but at the moment all tests are done on the same PC.
- Will keep that in mind too.
- I think I intend on using ZSTD ultimately, but haven't for my tests so far as capture speeds are very slow, i.e. with compression set at 6 it took over an hour to capture.
- Interesting. Can I clarify your statement about max compression: is it not recommended at all, or not recommended specifically for zstd? I assume it's not recommended due to the reasoning given. What would you say is the optimal compression level on average? (I realise this depends on individual cases and environment, so you probably can't give an answer.)
- Will keep this in mind too, but the plan is to have the client on the same switch where possible.
@Wayne-Workman Well, initially I was testing the difference between an old PartImage deployment and a new PartClone deployment, so the initial PartClone-captured deployment times were used as a comparison against the later deployments captured with the PartImage setting (yes, I know it's actually a PartClone capture, so there should be no difference). Everything is set the same, and the only thing different is the Image Manager setting. Currently this is 4 deployments with PartClone set and 4 deployments with PartImage set. I wouldn't say the sample is big enough to be conclusive, but it is what I have at the moment. I am now in the middle of retesting all of it by creating another 2 images: one captured with PartImage set and the other with PartClone set. Everything else remains the same, i.e. same client, same image settings (except the Image Manager setting), same switch with the same config, same capture process (as described by Tom earlier, post 8), etc.
-
@scgsg I would like to point out that, unless the client you're using for testing is similar to the clients you're deploying to, your benchmarks aren't going to be very useful for you. Your test client is using a processor that was an economy model when it was released almost 7 years ago. zstd and pigz are optimized for modern, efficient multi-threading systems, and I suspect your Pentium isn't taking advantage of them very well.
Personally I use zstd compression level 11, as I find it has nearly the same capture speed as gzip at compression level 6 while making the images 26% smaller and deploying 36% faster. Again, that is on more modern hardware than you're using; your results will vary.
zstd compression level 19 is the highest normal compression level. Above 19 are "ultra" compression levels that require massive amounts of RAM.
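For reference, this is roughly how those level ranges map onto zstd's command line. This is a sketch under assumptions: it assumes zstd is installed, and "disk.img" is a placeholder filename, not anything from FOG.

```shell
# Guarded so this is a no-op where zstd or the input file is absent.
if command -v zstd >/dev/null 2>&1 && [ -f disk.img ]; then
    zstd -11 disk.img -o disk.img.11.zst           # mid-range normal level
    zstd -19 disk.img -o disk.img.19.zst           # highest normal level
    zstd --ultra -22 disk.img -o disk.img.22.zst   # ultra level: heavy RAM use
fi
```

Note that levels 20-22 are refused unless the `--ultra` flag is given, which is zstd's way of making you opt in to the much larger memory requirements.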
-
@Junkhacker Yes, fair point; the client I'm using is one from the network, and the majority of clients are the same or equivalent.
Interesting, that certainly gives me an idea of how to approach zstd images when I start looking at this.
Ah ok, thank you for clarifying.
-
@Wayne-Workman OK, this really is odd. As stated, I've created 2 images with everything the same except the Image Manager setting before capture, and there is a difference in deployment times. Here is what I did:
Capture Images:
- Deployed the old image (created originally in PartImage on FOG 0.32) and set it to shut down after deployment
- Created a new image in FOG with the following settings to match the old image:
a. For the purpose of testing, let's call it Image1
b. Default Storage group
c. Windows 7
d. Multiple Partition Image - Single Disk (Not Resizable)
e. Partition = Everything
f. Level 3 compression
g. Image Manager = PartImage
- Captured the new image and set it to shut down after deployment
- Deployed the old image again and shut down after deployment
- Created another image in FOG with the same settings as above, but called this one Image2 with Image Manager set to PartClone
- Captured this image and set it to shut down after deployment
NOTE: The captured images are exactly the same size on the server (35061064).
Test Deployments:
Each deployment is set to shut down after deployment.
- Deploy Image1
- Deploy Image1
- Deploy Image2
- Deploy Image2
- Deploy Image1
- Deploy Image2
Here are the results:
Image1         Image2
16min 29sec    18min 30sec
16min 27sec    18min 30sec
16min 29sec    18min 27sec
Again, still not really a large enough sample to be definitive, but it does seem to imply an odd pattern.
-
@scgsg said in PartImage faster than PartClone?:
16min 29sec 18min 30sec
16min 27sec 18min 30sec
16min 29sec 18min 27sec
Which settings did these use?
-
@Wayne-Workman Not sure what you mean; I've given details of the settings for Image1 and Image2, so which settings are we talking about?
-
@scgsg said in PartImage faster than PartClone?:
Image1 and Image2 so what settings are we talking about
What times are image one, what times are image two?
-
@scgsg I'm confused about what this is showing. From what you're saying, the deploy times are for the "updated" images? What version of FOG are you running? Level 3 compression is "small" comparatively. We have "benchmarks" for a reason. Your mileage may vary, but it's up to you to find the "goldilocks" configuration.
You haven’t shown us any useful benchmarks.
If you’re trying to show us variance, we need to SEE the variance.
I'm assuming, from your figures below, that the image deployment times for all three are "averages", but what is this compared with/to? What other variables are present?
Is the network working optimally?
Is there a single hop to the clients receiving the images, or multiple?
Are all machines identical in specs? (Maybe use a single machine to perform the tests?)
Is your FOG 0.32 server configured and identically spec'd to your newer FOG server?
Like I said earlier, there are MANY reasons for speeds to vary from one another. Once you rule out ALL of those variances you can get a solid benchmark.
If your FOG 0.32 server has 8 CPUs and 16GB of RAM, then your "newer" system should be identical in EVERY way possible. (If a VM, use one with 0.32 and grab a snapshot. Change it out with 1.x.x and grab a snapshot. Switch between the two snapshots.)
Your network should be identical during your benchmarks.
Your machines should be identical during your benchmarks.
When presenting, you should show the same testing from the different servers to present the information accordingly.
For example:
Test set 1, FOG 0.32 (I know you can't do partclone testing here):
Image: Configuration: Time Capture: Time Deploy: Number of times tested.
Test set 2, FOG 1.x.x (import your 0.32 image and also present the information, less the capture):
Image: Configuration: Time Capture: Time Deploy: Number of times tested.
- Deploy Image 1 (What is this, one partimage, one partclone?)
- Deploy Image 1 (What is this, one partimage, one partclone?)
-
From my understanding of it:
He captured the exact same image with the same image definition, except for the image manager (partimage vs partclone) and then noticed different deploy times.
-
@Wayne-Workman Ah, OK, sorry, the formatting of the post didn't make it very obvious as I tried to lay it out like a table. Anyway, hopefully this is clearer:
Image1 Settings before capture
Default Storage group
Windows 7
Multiple Partition Image - Single Disk (Not Resizable)
Partition = Everything
Level 3 compression
Image Manager = PartImage
Image1 Deployment Times
16min 29secs
16min 27secs
16min 29secs
Image2 Settings before capture
Default Storage group
Windows 7
Multiple Partition Image - Single Disk (Not Resizable)
Partition = Everything
Level 3 compression
Image Manager = PartClone
Image2 Deployment Times
18min 30secs
18min 30secs
18min 27secs
The FOG server is 1.4.0, and all image deployment and capture is done on this server. Everything else remains constant, i.e. no changes to the FOG server other than changing which image the client uses, same switch (no configuration changes), same client, and settings as detailed above. The deployment times are pulled from the Imaging Log and are the actual times recorded.
I can confirm that both images capture with PartClone and both deploy with PartClone, and both images have the exact same size on the server, so both should clock similar times. Oddly enough, Image1 clocks 2 mins faster (with the only difference being that Image Manager for Image1 was set to PartImage for capture).
-
@scgsg What are the specs of your server? Have you tried capturing them in reverse order?
-
First, historical data: https://wiki.fogproject.org/wiki/index.php?title=Image_Compression_Tests
The below tests were performed with the FOG working branch just after the 1.4.1 release, commit b4544755284c6914fb880e4d41b45c3a6ca41d57. The image being used was a Windows 10 image, approximately 11GB in size uncompressed. All the test hosts have 3 cores, 4GB of RAM, and SSD disks. The FOG server has 1 core, 1GB of RAM, and a mechanical disk. The image was always set to "Single Disk - Resizable." There are no hops between the clients and the FOG server, and there were no other loads on the FOG server, clients, or other physical hardware. All test results were gathered programmatically.
Partimage - compression 3
Image capture of “testHost1” completed in about “4” minutes.
Completed image deployment to “testHost1” in about “3” minutes.
Completed image deployment to “testHost2” in about “5” minutes.
Completed image deployment to “testHost3” in about “7” minutes.
All image deployments completed in about "9" minutes.
Partclone gzip - compression 3
Image capture of “testHost1” completed in about “4” minutes.
Completed image deployment to “testHost1” in about “3” minutes.
Completed image deployment to “testHost2” in about “5” minutes.
Completed image deployment to “testHost3” in about “7” minutes.
All image deployments completed in about "9" minutes.
Partclone zstd - compression 3
Image capture of “testHost1” completed in about “3” minutes.
Completed image deployment to “testHost1” in about “3” minutes.
Completed image deployment to “testHost2” in about “5” minutes.
Completed image deployment to “testHost3” in about “7” minutes.
All image deployments completed in about "9" minutes.
Just to point out: it's not fair to compare zstd directly with gzip or pigz, because zstd's compression scale goes from 1 to 22 while gzip's & pigz's runs from 1 to 9, I think. Having gzip and pigz at 3 is a much higher relative compression setting than zstd at 3. In effect, pigz and gzip were at about 30% of their scale while zstd was at about 15% of its. Yet zstd still outperformed. So, for fun, I ran the below tests.
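A quick back-of-the-envelope way to see the scale mismatch (the percentages are approximate, and the scale tops of 9 and 22 are as stated above):

```shell
# Level 3 sits at very different points on each tool's scale
# (integer division, so percentages are rounded down).
gzip_pct=$(( 3 * 100 / 9 ))     # gzip/pigz levels run 1-9
zstd_pct=$(( 3 * 100 / 22 ))    # zstd levels run 1-22
echo "gzip level 3 is about ${gzip_pct}% up its scale"
echo "zstd level 3 is about ${zstd_pct}% up its scale"
```

So a "level 3" image is asking roughly twice as much of gzip/pigz, relative to their range, as it is of zstd.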
Partclone zstd - compression 12
Image capture of “testHost1” completed in about “5” minutes.
Completed image deployment to “testHost1” in about “3” minutes.
Completed image deployment to “testHost2” in about “4” minutes.
Completed image deployment to “testHost3” in about “6” minutes.
All image deployments completed in about "8" minutes.
Partclone zstd - compression 17
Image capture of “testHost1” completed in about “13” minutes.
Completed image deployment to “testHost1” in about “3” minutes.
Completed image deployment to “testHost2” in about “4” minutes.
Completed image deployment to “testHost3” in about “6” minutes.
All image deployments completed in about "8" minutes.
@scgsg I cannot replicate your findings. I don't understand, nor can I explain, your findings having a two-minute difference. My findings suggest, as I expected, that setting the Image Manager to "partimage" or "partclone gzip" results in the same performance, because in fact "partimage" gets changed to "partclone gzip" during capture.
Notes on my findings - an 11GB image is small for a Windows image. Typically Windows images for actual production deployment are larger, generally from 20GB to 30GB (in 2016). I’ve seen people deploy images as large as 160GB though. The larger the images are, the more ZSTD will pull ahead of the others and have a clear-cut superior performance. All research on it shows this, and big images in FOG will show it, too.
-
@Wayne-Workman Thank you for taking the time to look into this; I'll just chalk it up to the weird things that happen to me. Thank you for all your efforts and advice.