Very slow cloning speed on specific model
-
@Duncan Are the BIOS versions the same on these laptops? Settings too?
-
Yes, I've updated to the latest 01.03.03 Rev.A, and my tech guys set up all the BIOS settings the same way.
I'm going to rip through the 2 laptops and reset the BIOS, then make sure all settings are duplicated.
Going to see if I can pull some more hardware info and see if there are any differences there…
-
@Duncan Perhaps they are using different NVMe controllers for some reason. Should be interesting to check it out at least.
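For what it's worth, here is a quick sketch of how one could check that from a Linux shell. The `/sys/class/nvme/nvme0` paths are an assumption (they depend on how the drive enumerates), and the script just reports when they are absent:

```shell
# Hedged sketch: two quick ways to see which NVMe controller/drive a machine has.
# The nvme0 paths are assumptions -- adjust if the drive enumerates differently.
{
  for f in /sys/class/nvme/nvme0/model /sys/class/nvme/nvme0/firmware_rev; do
    if [ -r "$f" ]; then
      printf '%s: %s\n' "$f" "$(cat "$f")"
    else
      printf '%s: not present on this machine\n' "$f"
    fi
  done
  # lspci -nn shows the controller's numeric [vendor:device] IDs
  lspci -nn | grep -i 'Non-Volatile' || echo 'no NVMe controller found via lspci (or lspci unavailable)'
} | tee nvme_check.log
```

Comparing the `model` and `firmware_rev` strings between the two laptops would quickly confirm or rule out different controllers.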
-
@Quazz This is what I'm starting to think; I'll be back with some findings soon.
-
Running HWInfo, I'm failing to see any differences between the 2 laptops. Even in the BIOS, all settings and hardware info look the same…
-
@Duncan Can you schedule a debug deploy task on both machines? When you get to the shell, run
lspci -nn
and
hdparm -i /dev/sda
Take pictures and post here.
-
For some reason this reminded me of some of the earliest (and most explored) reports on slow deployments on certain system drive combinations.
https://github.com/Thomas-Tsai/partclone/issues/112
That user notes that if a partition is formatted and mounted on the target disk just prior to restore, it runs at expected speeds (but in FOS only if it’s not NTFS, for whatever reason). The next attempt to restore will be slow again unless the same step is taken.
They then went on to try a different drive (different brand) and that one worked normally.
-
@Duncan Thanks for the pictures. Though I don’t think it was such a great idea, because comparing the two listings as pictures is very error prone. I reckon both listings are identical, but I can’t say so for sure. And even then we still don’t know if one model might have a slightly different revision of some component.
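Rather than comparing photos, the listings could be captured as text files and compared mechanically. A tiny self-contained sketch of the idea (the listings below are fabricated purely to show the diff step; on the real machines you would redirect the actual `lspci -nn` output):

```shell
# On each machine you would capture the real listing, e.g.:
#   lspci -nn > lspci-modelA.txt     (and lspci-modelB.txt on the other laptop)
# Here we fabricate two small listings just to demonstrate the comparison.
printf '%s\n' \
  '00:00.0 Host bridge [0600]: Intel Corporation Device [8086:9b61]' \
  '01:00.0 Non-Volatile memory controller [0108]: Vendor A NVMe SSD [aaaa:0001]' \
  > lspci-modelA.txt
printf '%s\n' \
  '00:00.0 Host bridge [0600]: Intel Corporation Device [8086:9b61]' \
  '01:00.0 Non-Volatile memory controller [0108]: Vendor B NVMe SSD [bbbb:0002]' \
  > lspci-modelB.txt
# diff pinpoints any component that differs between the two machines
diff lspci-modelA.txt lspci-modelB.txt && echo 'listings identical' || echo 'listings differ'
```

That way even a one-character difference in a device ID or revision jumps out immediately.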
Is it only writing to disk that is slow, or also reading from it? Even if it doesn’t make sense for you to capture from the “slow” model, can you still give it a try to see?
Also I wonder if you could live boot some Linux CD/DVD distro and do write/read tests as well.
dd if=/dev/zero of=/dev/nvme...
dd if=/dev/nvme... of=/dev/null
NOTICE: Be aware the first command will completely destroy the data on your drive!! Only do this on machines you don't have valuable data on and can re-image again.
Did you get to talk to HP about this issue? What do they say?
-
@Sebastian-Roth I realize we are discussing NVMe drives here, and this is only for a point of reference, but in 2017 I created a benchmark post comparing different technologies (one being the disk subsystem) and their impact on imaging. https://forums.fogproject.org/topic/10459/can-you-make-fog-imaging-go-fast/6
From there I had these two commands:
… for the simple disk baseline I’m using the following Linux command to create a sequential 1 GB file on disk and then read it back. This process is designed to simulate the single unicast workload. The command used to write the 1 GB file is this:
dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=direct
The command to read it back is:
echo 3 | tee /proc/sys/vm/drop_caches && time dd if=/tmp/test1.img of=/dev/null bs=8k
The echo command is intended to disable the read cache so we get a true read-back value.
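As an aside, a scaled-down, non-destructive variant of that baseline (an assumed 64 MB version writing to a scratch file, so it is safe to run on any machine) might look like this. Note it uses `conv=fsync` instead of `oflag=direct`, since O_DIRECT is not supported on every filesystem (tmpfs, for instance):

```shell
# Assumed scaled-down variant of the 1 GB baseline: 64 MB to a scratch file.
# conv=fsync forces the data to disk before dd reports its timing.
dd if=/dev/zero of=./ddtest.img bs=1M count=64 conv=fsync 2> write.log
# Dropping the page cache needs root; skip it when running unprivileged.
if [ "$(id -u)" -eq 0 ]; then echo 3 > /proc/sys/vm/drop_caches; fi
dd if=./ddtest.img of=/dev/null bs=8k 2> read.log
# dd writes its throughput summary to stderr, captured in the logs above
tail -n 1 write.log read.log
rm -f ./ddtest.img
```

The last line of each log gives the write and read throughput for a quick machine-to-machine comparison.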
-
root@ubuntu:/home/ubuntu# dd if=/dev/nvme0n1 of=/dev/null
500118192+0 records in
500118192+0 records out
256060514304 bytes (256 GB, 238 GiB) copied, 231.216 s, 1.1 GB/s
root@ubuntu:/home/ubuntu# dd if=/dev/zero of=/dev/nvme0n1
dd: writing to '/dev/nvme0n1': No space left on device
500118193+0 records in
500118192+0 records out
256060514304 bytes (256 GB, 238 GiB) copied, 2204.34 s, 116 MB/s
I’ve emailed my guy at HP and will see what they say.
Going to upload from the “slow” laptop now and see what speeds I get.
-
@Duncan Please replace the dots with the appropriate ending for the drive (e.g. 0n1) and try again.
-
@Duncan Unless the “no space on device” error is causing this disparity, you have 1.1 GB/s read and 116 MB/s write speeds. 116 MB/s is barely faster than a fast rotating disk. Those numbers are from a commercial Linux distribution. (Please state the version number of the Ubuntu live image you used.)
I’m now wondering 2 things:
- If you booted into FOS Linux in debug mode and ran the same commands, would you see similar results?
- I know there is no correlation with this, but if you booted into Windows and ran a tool like CrystalDiskMark or ATTO, would we see the same wide difference between read and write speeds? That test would use the Windows kernel and Windows driver, so we could tell whether it’s a Windows/Linux driver issue or the hardware just performs this way.
-
@george1421 Regardless, partclone is much slower for him than even this dd test. In the GitHub issue thread I linked, they hypothesize that dd uses O_DIRECT and partclone does not, hence the difference.
It’s also worth noting that they had better speeds when the target was mounted, which live Ubuntu does automatically when it can.
Of course this is working under the assumption that it’s the exact same issue.
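If someone wanted to probe that hypothesis directly, one rough way (an assumed test, not something from the thread) is to time the same write both buffered and with O_DIRECT. On an affected drive the two should differ dramatically; on a healthy one they should be in the same ballpark:

```shell
# Rough probe of the O_DIRECT hypothesis: same 64 MB write, buffered vs direct.
# Scratch files are used here so nothing destructive happens; on the real
# machine you would point this at the target device instead.
dd if=/dev/zero of=./buffered.img bs=1M count=64 conv=fsync 2> buffered.log
# oflag=direct is not supported on every filesystem (e.g. tmpfs), hence the guard
if ! dd if=/dev/zero of=./direct.img bs=1M count=64 oflag=direct 2> direct.log; then
  echo 'O_DIRECT not supported on this filesystem' >> direct.log
fi
tail -n 1 buffered.log direct.log
rm -f ./buffered.img ./direct.img
```

If buffered writes crawl while O_DIRECT writes fly (or vice versa), that would point at caching/driver behaviour rather than the raw hardware.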
-
“No space on device” — should I format the SSD and run the tests again?
I will quickly build Windows off USB and test.
The Linux was just Ubuntu 18.04.3 LTS; I used Rufus to put it onto a USB stick and live booted.
Should I try the same test on a “working” laptop and see if we get the same speeds?
-
@Duncan said in Very slow cloning speed on specific model:
“No space on device” — should I format the SSD and run the tests again?
I’m suspecting the “no space on device” message is because you are writing to the raw disk and not to a partition — dd keeps writing until it fills the entire device, so the message is expected. Linux is a bit strange in that you can write to the raw disk just as you would to a partition on that disk. Of course, when you do that, you destroy the partition table and the files in the partitions.
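A harmless way to see this behaviour is with a disk image file standing in for the real drive (everything below is a self-contained sketch, no real disks touched): put the MBR boot signature where a partition table would end, overwrite the “raw disk”, and the signature is gone.

```shell
# disk.img stands in for the raw drive; no real disks are touched.
truncate -s 1M disk.img
# An MBR partition table ends with the 0x55 0xAA signature at offset 510.
printf '\125\252' | dd of=disk.img bs=1 seek=510 conv=notrunc 2>/dev/null
if dd if=disk.img bs=512 count=1 2>/dev/null | od -An -tx1 | grep -q '55 aa'; then
  echo 'signature present'
fi
# Writing to the "raw disk" clobbers everything, partition table included.
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc 2>/dev/null
if ! dd if=disk.img bs=512 count=1 2>/dev/null | od -An -tx1 | grep -q '55 aa'; then
  echo 'signature destroyed'
fi
```

The same thing happens on a real device: `dd if=/dev/zero of=/dev/nvme0n1` wipes the partition table along with everything else, and only stops when it hits the end of the disk.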
-
@Duncan Try using https://gparted.org/livecd.php or https://clonezilla.org/clonezilla-live.php - I have not tried either of those lately but I am fairly sure you can boot to a command shell with them.
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=2048m tmpfs /mnt/ramdisk
partclone.dd -d3 -C -s /dev/nvme0n1 -O /mnt/ramdisk/test.img
partclone.dd -d3 -C -s /mnt/ramdisk/test.img -O /dev/sda
Note that the first partclone command will also kind of error out when the ramdisk is full. But test.img still seems to be OK, from what I tested. Then replay it back to disk.
Do this same test on both devices, the one that is showing the issue and the one that seems ok.
-
@Duncan Ok, nice! Well, not nice that it’s going slow, but we know it’s not specific to the FOS kernel/initrd.
As well we know it’s not a general problem with Linux, right?!?
The dd speed is fine. The interesting thing is that in the topic Quazz posted we read:
“I could trick it into being fast by having the disk mounted: dd to an unmounted drive wrote slow, and dd to a mounted drive or using O_DIRECT wrote fast.”
As far as I understand, you dd’ed to an unmounted drive and it went fast as well. So maybe this is a different issue?! Maybe we should get in touch with Thomas Tsai…?