PartImage faster than PartClone?
-
@george1421 Ok, just to clarify: I’m deploying both images from the same FOG server, which is a VM on VMware vSphere running Ubuntu Minimal Server 14.04.5. The comparison deploys 2 images (not at the same time, naturally) from that same server to the same client each time. One image is the old image (34GB on server) from the old v0.32 server, which is PartImage; the other is the same image recaptured (33GB on server), which captures with PartClone at compression level 6. Both the FOG server and the client are on the same gigabit switch. The client machine is a rebadged Intel box with a Pentium Dual Core 3GHz, 4GB RAM, 1Gb LAN and a 150GB SATA HDD.
Admittedly I haven’t been looking at the estimated speeds but at the actual time taken to deploy, which in the last test was 12min 3secs for the old PartImage image and 17min 33secs for the new PartClone image. This was my second test, and PartClone was a lot faster than in the previous test, so for now I’ll assume I made an error recording the first test. That said, PartClone is still slower than PartImage, and this is under the best of conditions.
Thank you for the information, it is certainly interesting as a comparison.
I found a post here indicating that the level of compression may have a bearing on deployment speeds, and that a higher level of compression could potentially be better, though results would differ depending on client machine specs.
-
To further drive home the differences. Partimage is from 0.32 and before.
Its display updates are faster than Partclone’s: partclone refreshes at most once per second, where partimage is NRT (near real time). I realize 1 second is also near real time, but if you sit both side by side, one will appear to update faster than the other. Updating the display faster does not mean it’s deploying data faster, though.
The reason it matters where Partimage was used is that 0.32 was designed, primarily, for Windows XP machines. It could image Windows 7 too, sure, but there is a huge size disparity. A fresh Windows XP image might only use 3-13 GB on disk, while the same type of setup for Windows 7 might use between 10-20 GB on disk (assuming approximately 10 GB extra for Windows updates and what not).
The size matters because the amount of read/write that has to happen on the disk, the data that needs to be decompressed, and the amount of data the network is passing all play a vital role in the speed at which things happen.
-
@Tom-Elliott Thank you for that, useful to know. Just to clarify, that’s only the display update, so it doesn’t affect the accuracy of the recorded logs?
Just wanted to confirm that my initial statement (i.e. PartClone takes 3 times longer) is incorrect, as I’ve now checked the Image Log. It doesn’t take as long to deploy as I initially stated, but it is still slower than the PartImage deployment: ~5-6min longer to deploy a PartClone image that is 33GB on server (according to the log).
-
@scgsg Again, you need to compare apples to apples.
If you have a partimage image that’s exactly the same as your partclone image, I imagine you’d find nearly the same deploy times.
-
A way to test, semi-accurately:
- Deploy your partimage image to a machine. Do not let the machine boot into the system.
- Create a new image definition. Make sure it’s all the same (compression for partimage was always set at 3; Image Manager should be Gzip, not Gzip split).
- Assign that new image definition to that same machine.
- Capture that image and do not let it boot into the system when complete.
- Test both image deploys.
-
@Tom-Elliott Ok, the only difference between your advice and what I’ve done so far is that I used compression level 6 with Gzip for the PartClone image. I’ll do another test and capture a PartClone image with compression set to 3, and see how that affects the deployment times.
-
@Tom-Elliott Interesting. This time deploying the old PartImage image took 12min 32secs (pretty much as expected) and deploying the PartClone image with compression set at 3 clocked in at 17min 9secs (not majorly different from compression set at 6). What’s even more interesting is that I also did a test with the compression level at 9: that clocked in at 15min 47secs, with the PartImage deployment clocking in at 12min 21secs. This is strange, as the size on server isn’t hugely different between compression set at 6 and 9 (both round to 34GB).
With all that said, it still looks like PartImage comes in faster. This obviously isn’t conclusive, as I’d need to do a lot more tests, but it certainly implies a pattern. Thing is, I’m not sure how this would impact deployment on a larger scale: just because PartClone takes longer doesn’t necessarily mean large-scale deployment is affected negatively, but the extra 5 minutes it takes could grow further when deploying to a larger selection of computers with slower specs and LAN speeds. Could something that used to take half a day now take the entire day because of this?
-
@Tom-Elliott Don’t suppose you have any suggestions, be it server config, switch config or something else? I am somewhat at a loss as to why higher compression seems to equal faster deployment while the size on server remains pretty much the same, so the increased speed is not due to a smaller file transfer. I’m also not entirely sure how this impacts deployment to multiple computers.
-
@scgsg If you want to optimize for speed, I suggest switching to zstd compression. It works better with modern multi-core, multi-thread processors than the previously used types. As for whether partimage or partclone is faster by itself, I consider it a moot point since partimage has not been under active development in 7 years. Partclone might be slower due to its built-in checksums.
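If you want a rough feel for the gzip-vs-zstd difference on your own hardware before recapturing anything, a quick local comparison might look like this (a sketch only; the file path and size are arbitrary, and it assumes the gzip and zstd command-line tools are installed):

```shell
# Compress the same data with gzip and multithreaded zstd at level 6
# and compare the output sizes. /tmp/fogtest.bin is a hypothetical test file.
head -c 16777216 /dev/zero > /tmp/fogtest.bin      # 16 MB of compressible data
gzip -6 -c /tmp/fogtest.bin > /tmp/fogtest.gz
zstd -q -6 -T0 -c /tmp/fogtest.bin > /tmp/fogtest.zst   # -T0 = use all cores
ls -l /tmp/fogtest.gz /tmp/fogtest.zst
```

Wrap each compression command in time(1) to compare speed; on a multi-core machine zstd should finish noticeably faster at a comparable ratio, while on an old dual-core the gap will be much smaller.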
-
@Junkhacker I did try zstd in one of my earlier tests and it clocked in at 15min 33secs, so it is quicker. I’m not 100% sure, but I think I left compression at the default of 6. I didn’t test it any further because it took 1 hour 6 mins to capture the image. Testing it for optimal compression is going to take a while.
Fair enough, if that’s expected behavior between partimage and partclone, it is what it is and, as you say, a moot point. Furthermore, it’s not like I can capture with partimage anyway. That said, I have a real head scratcher with my latest test. By chance I captured a new image with Image Manager set to partimage, and it captured using partclone, so no surprises there. Here’s where it gets odd, though: I deployed this image and it’s clocking in at just over 15 mins. I’ve done this 3 times now, twice to 2 computers at the same time and once to just 1 computer; deployment times were between 15-16 mins. In between these tests I retested deploying an image that had been captured with Image Manager set to partclone, and the result was 17min 35secs (similar times as before). This is confusing: why would setting Image Manager to partimage create an image that is faster to deploy? It’s even more confusing because I checked the file sizes on the server for the two captured images (one set to PartClone and the other set to PartImage for capture) and they are exactly identical. I really don’t understand how deployment can be 2 mins faster just because I set it to partimage for the capture, given that it doesn’t capture using partimage and uses partclone anyway. Any idea what the heck is going on there?
-
@scgsg Since we no longer support capturing with partimage, an image set to partimage is captured using partclone with literally the exact same code as if it had been set to partclone to begin with. I suspect external factors are at play causing any difference in speed you are seeing.
-
Using FOG 1.4 and ZSTD with partclone, my entire deploy time is 3 minutes. That’s from the moment the power button is pressed until the moment the task completes in the FOG Task Manager. This is with a Win10 image about 11GB in size; the server disk is mechanical, the host disk is SSD.
-
I have similar conditions, but my image is a bit bigger at around 15-16GB, and I deploy in 2-3 minutes.
-
@x23piracy
I can also confirm, with multiple pieces of hardware (in this case, Dell Optiplex 790, Dell E7450, Lenovo ThinkPad T520, and ThinkPad X140e), that zstd and 1.4 will give me sub-5-minute deploys of a fully configured Win10 image, including Office, the Adobe Suite, and all Windows updates. I routinely deploy to 100+ devices a week, in both unicast and multicast. The FOG server is an old P4 with 4 gigs of RAM and 4 NICs in a LACP bridge.
-
Thank you for all the responses; I’ve been away for a couple of days and didn’t get a chance to look at this. ZSTD is certainly quicker in the 1 test I did, but I haven’t tested any further as it took over an hour to capture. Still, it was only as quick as my most recent test with partclone gzip.
@Junkhacker Yes, I am aware captures are no longer done with partimage, and that is my point: if I configure it as partimage, the capture is done with partclone (and it changes the setting to partclone after capture), but deployment is quicker than for an image captured with Image Manager set to partclone. I don’t know why that is; I’ve done a number of tests now, and each time the deployment is 2 mins quicker. I can’t fathom why a setting which ultimately makes no difference to the capture of the image will deploy quicker.
-
@scgsg There are many things that impact the speed, regardless of the manager in use.
- Network: the most obvious one here, and it would play a big factor.
- System: decompression is handled on the client.
- Disk read/write speeds: probably the largest factor on most modern systems.
- Compression type: ZSTD compresses and decompresses much faster than gzip when multithreading.
- Compression ratio: 19 is the maximum I’d recommend for zstd, as the memory required for anything higher makes it nearly impossible to perform on most systems. 0 is the minimum; it still compresses, but barely. This impacts speed because of the amount of data that needs to transfer over the network. 1GB plain on a 1Gbps link will take anywhere between 5-10 seconds on a “perfect” network. 1GB compressed to 50% would take half the time to transfer, because the data size would only be about 500MB. The speed it writes to disk would primarily be limited by the system decompressing the data. You must think of this when capturing too, because while higher compression ultimately means faster deploys, it can slow the capture process quite a lot.
- Hops over the network: the most direct route to a system will give you the fastest speed. If you have to jump through 10 switches to reach the host, your delays are partly network-related, as the data has to traverse all those points to reach the target.
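That compression-ratio arithmetic can be sketched in a few lines of shell (hypothetical numbers; 1 Gbps is taken as roughly 125 MB/s of raw bandwidth, ignoring protocol overhead):

```shell
# Back-of-the-envelope transfer-time estimate for a 1GB image.
SIZE_MB=1024      # uncompressed image data
RATIO=50          # compression ratio in percent (50% => half the bytes on the wire)
LINK_MBPS=125     # ~1 gigabit per second expressed in MB/s

COMPRESSED_MB=$(( SIZE_MB * RATIO / 100 ))
SECONDS_PLAIN=$(( SIZE_MB / LINK_MBPS ))
SECONDS_COMP=$(( COMPRESSED_MB / LINK_MBPS ))
echo "plain: ${SECONDS_PLAIN}s  compressed: ${SECONDS_COMP}s"   # prints "plain: 8s  compressed: 4s"
```

Halving the bytes on the wire halves the transfer time; the trade-off is that the client then spends CPU time decompressing, which is why the client's processor matters so much.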
These obviously aren’t everything that might impact speeds, just a basic list of what I can think of right now.
As you’ve seen from the many other posts, however, your case appears to be the exception. I understand there may be others with results similar to your own, but please understand we aren’t making these changes to make your day go longer. If anything, we’ve made these changes so the vast majority of people will have a faster deploy/capture time. It also helps that partclone is in active development, where partimage seems to have lost its development.
-
@scgsg said in PartImage faster than PartClone?:
if i configure it as partimage, the capture is done in partclone (and it will change the setting to partclone after capture) but deployment is quicker than an image captured with image manager set to partclone.
Most people on the internet are aware of the scientific method. How were you testing? How many times? What were the constants? The variable here is what the Image Manager is set to; all other things should be constant, and “all other things” would include every bullet that @Tom-Elliott posted above. And this should be repeatable: I should be able to conduct your test at home here and find the same results.
-
@Wayne-Workman let me save you the trouble of testing:
case $imgFormat in
    6)
        # ZSTD Split files compressed.
        zstdmt --ultra $PIGZ_COMP < $fifo | split -a 3 -d -b 200m - ${file}. &
        ;;
    5)
        # ZSTD compressed.
        zstdmt --ultra $PIGZ_COMP < $fifo > ${file}.000 &
        ;;
    4)
        # Split files uncompressed.
        cat $fifo | split -a 3 -d -b 200m - ${file}. &
        ;;
    3)
        # Uncompressed.
        cat $fifo > ${file}.000 &
        ;;
    2)
        # GZip/piGZ Split file compressed.
        pigz $PIGZ_COMP < $fifo | split -a 3 -d -b 200m - ${file}. &
        ;;
    *)
        # GZip/piGZ Compressed.
        pigz $PIGZ_COMP < $fifo > ${file}.000 &
        ;;
esac
This is the code that the Image Format setting gets used for on uploads. The default partclone image format is “1”; partimage is “0”.
As you can see, with either of those two settings, literally the same thing is done on an upload with regard to how the image is captured.
-
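To make the point concrete, here is a minimal sketch (not FOG’s actual code) showing that formats 0 and 1 both fall through to the same catch-all branch of a case statement like the one above:

```shell
# Both partimage (0) and partclone (1) hit the "*" branch,
# so the capture pipeline is identical for the two settings.
for imgFormat in 0 1; do
    case $imgFormat in
        6|5|4|3|2) echo "$imgFormat: special handling" ;;
        *)         echo "$imgFormat: default gzip path" ;;
    esac
done
# prints:
#   0: default gzip path
#   1: default gzip path
```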
@Tom-Elliott I wanted to say that I hope I haven’t in any way (either by implication or statement) come across as negative or complaining. If I have, that’s my bad, as I think this project is great and I am very thankful for all your hard work. My intention was actually to find out whether this was expected behavior and/or to find the most efficient deployment strategy.
Thank you for pointing out the main things that can/will have an impact on speed.
- The gig switch only has the 2 test PCs, the vSphere host (that the FOG server sits on; there are no other VMs on this host) and a connection to the rest of the network. The gig switch has the default config on it; is there any particular config I need to pay attention to, given that I haven’t used multicast deployment in any of my tests?
- I will keep that in mind, particularly with our older machines, but at the moment all tests are done on the same PC.
- Will keep that in mind too.
- I think I intend to use ZSTD ultimately, but I haven’t in my tests so far, as capture speeds are very slow, i.e. with compression set at 6 it took over an hour to capture.
- Interesting. Can I clarify your statement about max compression: is it not recommended for any zstd, or is it the recommended maximum for zstd? I assume it’s not recommended to go higher, for the reasoning given. And what would you say is the optimal compression level on average? (I realize this depends on individual cases and environments, so you probably can’t give an answer.)
- Will keep this in mind too, but the plan is to have the client on the same switch where possible.
@Wayne-Workman Well, initially I was testing the difference between the old partimage deployment and a new partclone deployment, so the initial partclone-captured deployment times were used as a comparison against the later deployments captured with the partimage setting (yes, I know it’s actually a partclone capture, so there should be no difference). Everything is set the same and the only thing different is the Image Manager setting. Currently this is 4 deployments where partclone was set and 4 deployments where partimage was set. I wouldn’t say the sample is big enough to be conclusive, but it’s what I have at the moment. Right now I’m in the middle of retesting all of it by creating another 2 images: 1 captured with partimage set and the other with partclone set. Everything else remains the same, i.e. same client, same image settings (except the Image Manager setting), same switch with the same config, same process of capture (as described by Tom earlier, post 8), etc.
-
@scgsg I would like to point out that unless the client you’re using for testing is similar to the clients you’re deploying to, your benchmarks aren’t going to be very useful for you. Your test client is using a processor that was an economy model when it was released almost 7 years ago. zstd and pigz are optimized for modern, efficient multi-threading systems, and I suspect your Pentium isn’t taking advantage of them very well.
Personally, I use zstd compression level 11, as I find it has nearly the same upload speed as gzip at compression level 6 while making the images 26% smaller and deploying 36% faster. Again, that is on more modern hardware than you’re using; your results will vary.
zstd compression level 19 is the highest normal compression level. Levels above 19 are “ultra” compression levels that require massive amounts of RAM.
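For reference, the ultra levels have to be requested explicitly on the command line. A quick sketch (hypothetical file path; assumes the zstd CLI is installed):

```shell
# Level 19 is the top "normal" level; 20-22 additionally require --ultra
# and allocate a much larger memory window.
printf 'sample data for level testing\n' > /tmp/levels.txt
zstd -q -f -19 /tmp/levels.txt -o /tmp/levels.19.zst
zstd -q -f --ultra -22 /tmp/levels.txt -o /tmp/levels.22.zst
# Without --ultra, "zstd -22" refuses to run with an invalid-level error.
```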