SOLVED Very slow cloning speed on specific model

  • Moderator

    @Duncan I don’t have Acronis 2019 to test but I found an ISO download on their official website (dl.acronis.com) and booted that in a VM. On bootup I definitely see Linux kernel messages, and using Ctrl+Alt+F2 I was able to switch to a virtual terminal. Got kernel version 4.9.51-Acronis-b12-x86_64

    We used kernel version 4.9.x in 2017 - see here: https://fogproject.org/kernels/

    Though we don’t have the exact same kernel version yet, we can still try to get close to that. We’ll still miss the Acronis-specific patches (not sure, but I guess they use some modifications) but it’s worth testing. We will build a specific kernel and initrd for you.

  • Moderator

    @Duncan said:

    Acronis True Image 2019

    Ok, reading some more on the web I found an article saying that Acronis actually has two different bootable media - one WinPE/WinRE-based and one Linux-based. Do you have Acronis installed on one of your Windows PCs? Please see if you can launch the “Rescue Media Builder” as described here and create a Linux-based boot media. Just want to make sure we don’t compare apples with oranges here when we see Acronis doing it fast.

    Switching between virtual terminals in Linux is usually done by pressing Ctrl+Alt+F1 or F2…F7 - see if that gets you to a console.


  • Acronis True Image 2019

    Can’t see a way to drop into a console to run that command.


  • @Sebastian-Roth

    Happy to help out where I can; we also have another batch of 60 coming. Will be interesting to see the success rate of these ones.

    When I spoke to HP he was unsure of how to move forward with this. Hopefully it gets escalated to their tech guys. I asked about any firmware updates, but he said only the BIOS was available.

  • Moderator

    @Quazz said in Very slow cloning speed on specific model:

    As I understand it, Acronis uses WinPE, supporting the idea that this is some kind of Linux problem, but it appears to be hard to track down exactly how and why it happens.

    When you first mentioned Acronis I didn’t think much further. But now that you say WinPE I had to look this up. Not sure if newer versions are WinPE based but the older ones all were Linux based: https://kb.acronis.com/content/1537

    @Duncan Which version of Acronis do you have? Any chance you can get to a console and run uname -a to get the kernel version for us?


  • @DeRo93 We didn’t get the issue fixed but ran out of G6 laptops to test with. We’re expecting the next batch of 50 on Tuesday so I expect to be more active on here next week.

  • Moderator

    @Duncan As I understand it, Acronis uses WinPE, supporting the idea that this is some kind of Linux problem, but it appears to be hard to track down exactly how and why it happens.

  • Moderator

    @Duncan Good to know you have a workaround for now. But we will need you to keep on testing things as we don’t have the hardware to test and work on this.

    I will read through all of this over the weekend and see what steps we could take next.


  • @Duncan

    In the meantime i have started to image with Acronis, imaging disk to disk in about 7 minutes. This will do until a fix is found. Many thanks to everyone for all the help

  • Moderator

    @Duncan Ok, nice! Well, not nice that it’s going slow, but we know it’s not specific to the FOS kernel/initrd.

    We also know it’s not a general problem with Linux, right? dd speed is fine. The interesting thing is that in the topic Quazz posted we read:

    I could trick it into being fast by having the disk mounted: dd to an unmounted drive wrote slow, and dd to a mounted drive or using O_DIRECT wrote fast.

    As far as I understand, you dd’ed to an unmounted drive and it went fast as well. So maybe this is a different issue?!

    Maybe we should get in touch with Thomas Tsai…?
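
    For what it’s worth, the buffered vs O_DIRECT difference from that thread is easy to compare with dd alone. A minimal sketch - the paths here are placeholders, so point it at a scratch file first and never at a disk you care about:

    ```shell
    # Compare buffered vs O_DIRECT write speed with dd. Writing to a scratch
    # file is safe; swap in a raw device only on a throwaway disk.
    TARGET=/tmp/dd-direct-test.img

    # Buffered write: data goes through the page cache; conv=fsync flushes
    # at the end so the reported speed includes the write-out to disk.
    dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fsync

    # Direct write: bypasses the page cache (the O_DIRECT case from the
    # thread). Note oflag=direct can fail with "Invalid argument" on a
    # tmpfs-backed /tmp - use a path on a real filesystem if it does.
    dd if=/dev/zero of="$TARGET" bs=1M count=256 oflag=direct

    rm -f "$TARGET"
    ```

    If the second command is dramatically faster on the affected G6 machines, that would line up with the behaviour described in the thread Quazz found.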


  • @Sebastian-Roth

    [screenshot of the slow imaging speed]

    going so slow!

    will upload pic of the “working” one soon

  • Moderator

    @Duncan Try using https://gparted.org/livecd.php or https://clonezilla.org/clonezilla-live.php - I have not tried any of those lately but I am fairly sure you can boot up to a command shell with those.

    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=2048m tmpfs /mnt/ramdisk
    partclone.dd -d3 -C -s /dev/nvme0n1 -O /mnt/ramdisk/test.img
    partclone.dd -d3 -C -s /mnt/ramdisk/test.img -O /dev/sda
    

    Note that the first partclone command will also kind of error out when the ramdisk drive is full. But test.img still seems to be ok - from what I tested. Then restore it back to disk.

    Do this same test on both devices, the one that is showing the issue and the one that seems ok.
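
    One note on the ramdisk step: a 2048m tmpfs only works if the machine has that much RAM free, and a full 256GB disk image obviously will not fit in it, which is why the first partclone run errors out when it fills up. A quick sanity check before running the test (same paths and size as the commands above, needs root):

    ```shell
    # Check free RAM before creating the tmpfs, then verify the mount
    # actually took the size we asked for.
    free -m
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=2048m tmpfs /mnt/ramdisk
    df -h /mnt/ramdisk   # should show a 2.0G tmpfs on /mnt/ramdisk
    ```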

  • Moderator

    @Quazz Good point!!!

    @Duncan So we better do partclone tests then. I’ll look up the syntax later on.

  • Moderator

    @Duncan said in Very slow cloning speed on specific model:

    “no space on device” - will I try to format the SSD and run tests again?

    I’m suspecting the “no space on device” is because you are writing to the raw disk and not to a partition. The space is all consumed because the raw device spans the entire disk, partition tables included. Linux is a bit unusual in that you can write to the raw disk just as you would to a partition on that disk. Of course, when you do that you destroy the partition table and the files in the partitions.
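
    As a side note, the “No space left on device” message itself is harmless here - it just means dd wrote to the very end of the raw device. The same error can be reproduced safely with the /dev/full pseudo-device (a standard Linux device node, used here purely for illustration):

    ```shell
    # /dev/full accepts no data and always returns ENOSPC on write,
    # producing the same message dd printed when it reached the end
    # of the raw disk.
    dd if=/dev/zero of=/dev/full bs=4k count=1
    # dd: writing to '/dev/full': No space left on device
    ```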


  • @george1421

    “no space on device” - will I try to format the SSD and run tests again?

    I will quickly build Windows off USB and test.

    The Linux was just Ubuntu 18.04.3 LTS; I used Rufus to put it onto a USB and live-booted it.

    I will try the same test with a “working” laptop and see if we get the same speeds.

  • Moderator

    @george1421 Regardless, partclone is much slower for him than even this dd test. In the github issue thread I linked, they hypothesize that dd uses O_DIRECT and partclone does not, hence the difference.

    It’s also worth noting that they had better speeds when the target was mounted, which live Ubuntu does automatically when it can.

    Of course this is working under the assumption that it’s the exact same issue.

  • Moderator

    @Duncan Unless the “no space on device” error is causing this number disparity, you have 1 GB/s read and 116 MB/s write speeds. That 116 MB/s write speed is barely faster than a fast rotating disk. Those numbers are using a commercial Linux distribution. (Please state the version number of the Ubuntu Live you used)

    I’m now wondering two things.

    1. If you booted into FOS Linux in debug mode and ran the same commands, would you see similar results?
    2. I know there is no correlation with this, but could you boot into Windows and run a tool like CrystalDiskMark or ATTO to see if we get the same wide difference between read and write speeds? That would be using the Windows kernel and Windows driver, so we could see if it’s a Windows/Linux driver issue or if the hardware just performs this way.

  • Moderator

    @Duncan Please replace the dots with the appropriate ending for the drive (e.g. 0n1) and try again.


  • @Sebastian-Roth

    root@ubuntu:/home/ubuntu# dd if=/dev/nvme0n1 of=/dev/null
    500118192+0 records in
    500118192+0 records out
    256060514304 bytes (256 GB, 238 GiB) copied, 231.216 s, 1.1 GB/s
    root@ubuntu:/home/ubuntu# dd if=/dev/zero of=/dev/nvme0n1
    dd: writing to '/dev/nvme0n1': No space left on device
    500118193+0 records in
    500118192+0 records out
    256060514304 bytes (256 GB, 238 GiB) copied, 2204.34 s, 116 MB/s
    

    I’ve emailed my guy at HP and will see what they say.

    Going to upload from the “slow” laptop now and see what speeds I get.

  • Moderator

    @Sebastian-Roth I realize we are discussing NVMe drives here, and this is only for a point of reference, but in 2017 I created a benchmark post to compare the differences in different technologies (one being the disk subsystem) and their impact on imaging. https://forums.fogproject.org/topic/10459/can-you-make-fog-imaging-go-fast/6

    From there I had these two commands:

    … for the simple disk baseline I’m using the following Linux command to create a sequential 1GB file on disk and then to read it back. This process is designed to simulate the single unicast workload. The command used to write the 1GB file is this:
    dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=direct
    The command to read it back is:
    echo 3 | tee /proc/sys/vm/drop_caches && time dd if=/tmp/test1.img of=/dev/null bs=8k
    The echo command drops the read cache so we get a true read-back value.
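
    The two baseline commands from that post can be run back to back as a small script - a sketch, assuming a local /tmp with at least 1GB free and root privileges for the drop_caches write:

    ```shell
    # Sequential 1 GB write with O_DIRECT, then a cold-cache timed read-back,
    # per the benchmark post above. Run as root; oflag=direct can fail on a
    # tmpfs-backed /tmp, so use a path on a real filesystem if it does.
    TESTFILE=/tmp/test1.img

    dd if=/dev/zero of="$TESTFILE" bs=1G count=1 oflag=direct

    # Drop page/dentry/inode caches so the read actually hits the disk.
    echo 3 > /proc/sys/vm/drop_caches
    time dd if="$TESTFILE" of=/dev/null bs=8k

    rm -f "$TESTFILE"
    ```

    Running this on both a “working” and a “slow” G6 would give directly comparable numbers.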
