Very slow cloning speed on specific model
-
@Duncan Too bad, the driver for the Intel I219-V (8086:15be) was only added in kernel version 4.12! Possibly I can build a 4.9.x version for you that has the driver in it, though I am not sure this will actually lead to anything. This week is really busy and I am not sure when I will get to this. Will keep you posted.
-
@Duncan After some deep digging in the kernel commit I found the ones that should add support for your NIC:
- https://github.com/torvalds/linux/commit/3a3173b9c37aa1f07f8a71021114ee29a5712acb
- https://github.com/torvalds/linux/commit/c8744f44aeaee1caf5d6595e9351702253260088
- https://github.com/torvalds/linux/commit/68fe1d5da548aab2b6b1c28a9137248d6ccfcc43
I will try to add those to 4.9.51 later on and let you know.
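For anyone wanting to reproduce this, here is a rough sketch of the backport, assuming a stable tree checkout of v4.9.51 with the mainline repo added as a remote (the picks may need manual conflict resolution, and in practice you'd build with the FOS kernel config rather than defconfig):
# the stable tree carries the v4.9.51 tag
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git && cd linux
git checkout -b 4.9.51-nic v4.9.51
git remote add mainline https://github.com/torvalds/linux.git
git fetch mainline
# pick the three commits listed above
git cherry-pick 3a3173b9c37aa1f07f8a71021114ee29a5712acb \
    c8744f44aeaee1caf5d6595e9351702253260088 \
    68fe1d5da548aab2b6b1c28a9137248d6ccfcc43
make defconfig && make -j"$(nproc)" bzImage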
-
@Duncan Just updated the kernel in https://fogproject.org/kernels/bzImage-4.9.51
Please re-download and try again.
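In case anyone else needs it, fetching the kernel into place should be as simple as something like this (assuming the default FOG web root, as shown in the ls output further down):
wget -O /var/www/html/fog/service/ipxe/bzImage-4.9.51 https://fogproject.org/kernels/bzImage-4.9.51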
-
Yasss!
Uploaded the kernel, got some errors about the kernel being too old.
On the client I removed the “Host Init” setting but left the “Host Kernel” as bzImage-4.9.51 and kicked off an image.
Built at a nice decent speed now. Built 3 so far, all at full speed!!
Thank you so much for all the help @Sebastian-Roth @Quazz
-
@Duncan Sorry, but this doesn’t make any sense to me! Using the new bzImage-4.9.51 you should definitely also need to use init-4.9.x.xz! Please take a picture of the settings in the web UI and post it here. As well, run
ls -al /var/www/html/fog/service/ipxe/bzImage* /var/www/html/fog/service/ipxe/init*
and post the full output here.
@Duncan said in Very slow cloning speed on specific model:
Built at a nice decent speed now. Built 3 so far, all at full speed!!
Maybe it’s just some devices with drives that wouldn’t cause a problem using the latest kernel version as well???
-
Yeah, this is weird. I removed the settings from the other laptop I was building, and it still built… I guess it was a lucky laptop that could build.
Back to the original slow laptop: removed the Host Kernel setting and it’s back to being slow.
Deleted the original slow laptop from FOG, re-registered it and added bzImage-4.9.51 to Host Kernel. Host Init still blank.
Deployed an image and it’s full speed.
ls -al /var/www/html/fog/service/ipxe/bzImage* /var/www/html/fog/service/ipxe/init*
-rw-r--r-- 1 fog      fog       8118832 Dec 18  2018 /var/www/html/fog/service/ipxe/bzImage
-rw-r--r-- 1 fog      fog       7562352 Dec 18  2018 /var/www/html/fog/service/ipxe/bzImage32
-rw-r--r-- 1 fog      www-data  7465280 Jun 27  2017 /var/www/html/fog/service/ipxe/bzImage32_OLD
-rw-r--r-- 1 www-data www-data  7942736 Dec  4 23:16 /var/www/html/fog/service/ipxe/bzImage-4.9.51
-rw-r--r-- 1 fog      www-data  7601536 Jun 27  2017 /var/www/html/fog/service/ipxe/bzImage_OLD
-rw-r--r-- 1 fog      www-data 18646084 Jun 27  2017 /var/www/html/fog/service/ipxe/init_32.xz
-rw-r--r-- 1 www-data www-data 19744348 Dec  1 07:31 /var/www/html/fog/service/ipxe/init-4.9.x.xz
-rw-r--r-- 1 fog      www-data 19605632 Jun 27  2017 /var/www/html/fog/service/ipxe/init.xz
-
I’ve built a few more laptops now.
2 built straight out of the box, no kernels or inits needed.
One had the slowness issue. On the host page I just added the kernel setting.
Deployed the image and away it went. Full speed, building at about 8GB/min.
-
@Duncan Can’t believe it, but if it’s the way you’re saying (and showing in the pictures) - what can I say…
@Quazz gave me a good hint on kernel 4.10 or 4.11 introducing APST. We kind of expect this to be causing the problem. See some information on this here: https://wiki.archlinux.org/index.php/Solid_state_drive/NVMe#Power_Saving_APST
He also just added the nvme CLI tools to the FOS initrds so we can try to debug more of this with more recent kernel versions.
It’s all up to you. If you are happy with the old 4.9.51 kernel we can just leave it like that. Though I don’t think it’s a great solution.
-
@Duncan This suggests that it is indeed a kernel issue. Interesting that it ran at all with the newer inits, since I believe they’re only slated for backwards compatibility down to 4.14.
My best guess at the moment is that the APST feature introduced in 4.10 is either the problem in its entirety or related to it somehow.
It’s still building, but when it’s done, there will be an init available at https://dev.fogproject.org/blue/organizations/jenkins/fos/detail/master/107/artifacts
EDIT: That build failed due to an unrelated error; here is a different link: https://drive.google.com/open?id=1u_HuN5NSpzb7YmQBAsrzDELteNmlWUWU
This will include the nvme CLI utility, which will give some info and allow some management of the NVMe device.
I’d be interested in seeing a debug deploy on this init (use kernel 4.19 as well). If you could schedule one for a problematic host and run the following commands that would help a ton.
sudo nvme get-feature -f 0x0c -H /dev/nvme0
That will list out some info; the one I’m interested in is whether APST is enabled or not.
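For reference - assuming a reasonably recent nvme-cli, the human-readable output should contain a line along these lines (exact wording can vary between versions):
Autonomous Power State Transition Enable (APSTE): Enabled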
If it’s enabled you can disable it by doing
sudo nvme set-feature -f 0x0c -v 0 /dev/nvme0
Then type
fog
and press Enter (you’ll have to do this a couple more times until it starts partclone and such).
I’m hoping that this will resolve the issue entirely, and if so we can add it to the inits whenever an NVMe device is detected. APST is unneeded in the FOS environment: we don’t care about the storage device’s power consumption, since it only needs to be captured or deployed and then the installed system takes over from there.
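Roughly what I have in mind for the inits - a minimal sketch, assuming nvme-cli is present in the initrd and the drive supports feature 0x0c (error handling omitted):
# disable APST on every NVMe controller node found
# (the glob matches nvme0, nvme1, ... but not namespaces like nvme0n1)
for ctrl in /dev/nvme[0-9]; do
    [ -e "$ctrl" ] || continue
    nvme set-feature -f 0x0c -v 0 "$ctrl"
done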
-
@Quazz said in Very slow cloning speed on specific model:
I’d be interested in seeing a debug deploy on this init (use kernel 4.19 as well).
Downloaded the new init_partclone and added it to the host. Set the kernel to 4.19.
Ran the commands; APST was enabled. I disabled it and started to image.
Now it hung on Restoring Partition Tables (GPT)…
Rebooted and tried again. It’s now building at 2.7GB/min.
One thing I did notice was that my storage nodes were on an old kernel, 4.11.0. I have now copied over the latest ones to the nodes.
-
@Duncan Thank you for trying it out. Very interesting results!
Much better than before, though not quite the speed you’d expect either.
@Sebastian-Roth What do you think? Should we investigate further?
-
I can live with these speeds; the image is only 70GB.
A lot faster than three weeks.
I’m going to test this on my other sites now and see what speeds I get.
Seems to be that APST though. I wonder if some have it enabled out of the box and others don’t. I’m going to run the command to check on a “working” laptop and see if it’s disabled by default.
-
@Duncan As far as I understand it’s only for specific drives on specific laptops (even amongst the same model), but it’s relatively widespread regardless.
Potentially slight firmware differences or the like.
Using that init file, you could add the command line that disables APST to images/dev/postinitscripts (a sketch follows below).
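For example - a minimal sketch, assuming a standard setup where scripts in that directory are sourced from fog.postinit before imaging starts (the script name disable_apst.sh is purely illustrative):
# images/dev/postinitscripts/disable_apst.sh (hypothetical name)
# lock the NVMe drive into its highest power state before imaging
if [ -e /dev/nvme0 ]; then
    nvme set-feature -f 0x0c -v 0 /dev/nvme0
fi
You would then reference it from fog.postinit so FOS runs it on the client before partclone kicks off.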
-
@Quazz Do you see any issue with just disabling it for all NVMe drives? I don’t know the impact if we did. FOS Linux is not a general purpose OS, so we don’t really want or need any sleep functions at all. We really want the OS and the hardware to run as fast as possible and not be concerned about any power savings.
You are right about the postinit scripts. If we had the raw data, I’m sure we could come up with a script to disable this function on certain detected drives, or just turn it off altogether. Comments??
-
On a working laptop APST was enabled also.
So I guess it is a firmware or slight hardware difference.
With APST disabled on this one again, I’m seeing speeds of 2.8 - 3.0GB/min.
-
@george1421 As far as I’m aware, all disabling APST does is lock the drive to its “highest power state”, which for the purposes of FOS isn’t a bad choice if it would otherwise malfunction.
I don’t foresee a problem doing this for all NVMe devices, but of course there might be instances we are unaware of currently where it does matter for something.
That said, FOS only runs for a little while, so odds of it being bad are very low.
-
@Duncan said in Very slow cloning speed on specific model:
Kernel 4.9.51 … Deployed the image and away it went. Full speed, building at about 8GB/min.
Is this all the way through or just the top speed? Maybe it’s better you note down the full deploy time to compare the different situations more appropriately?!
latest kernel with APST disabled… It’s now building at 2.7GB/min.
Does this really mean it’s that much slower than using the 4.9.51 kernel, or is it more just a top-speed thing? As I said, better we compare the time it takes to deploy the full drive.
@george1421 @Quazz I’d vote for disabling APST in FOS as we don’t need to save energy. The drive should go at full speed.
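To put the rates in perspective, with the 70GB image mentioned above: at 8GB/min a deploy takes roughly 70 / 8 ≈ 9 minutes, while at 2.7GB/min it takes about 70 / 2.7 ≈ 26 minutes, almost three times as long. That’s why the full deploy time is the fairer comparison.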
-
Definitely a difference in speeds.
Using bzImage 4.19 and init_partclone.xz I got an average of 3GB/min.
Using bzImage-4.9.51 and init.xz it started at 7GB/min and dropped, hovering around 6.6(ish)GB/min.
Both tests on the same laptop.
-
@Sebastian-Roth So I’m wondering two things.
- Before 1.5.8 comes out, could/should we create a post init script with the logic that might go into FOS Linux for 1.5.8, to test the impact of this proposed change? That way, if the change caused problems, deleting the script would fix it. (I know I worded that a bit funny, but the idea is to test it with an approved post init script before it’s coded into 1.5.8. So if people have this issue, we can say “place this script here and test”. This would be for 1.5.7 and lower versions.)
- Does the kernel parameter
nvme_core.default_ps_max_latency_us=0
have any impact on shutting off this feature right at the disk level? Better/worse/no change? If it has a positive impact, then it could be integrated into the post init script and then into FOS Linux 1.5.8.
-
@george1421 Yes, good points:
- It’s a good idea to provide a post init script right now for people to test. I am not exactly sure what part is doing it. I think it’s
nvme set-feature -f 0x0c -v 0 /dev/nvme0
right? @Duncan @Quazz - Would you like to help testing as well, @oleg-knysh?
- I have thought about the
nvme_core.default_ps_max_latency_us
parameter as well. Not sure if that is sort of doing the same thing?! Probably a bit different, but it might have the same outcome?! The parameter is mentioned in that Arch Linux wiki I posted earlier. @Duncan Would you please test this kernel parameter for us on that problematic laptop? Go to the host’s settings in the web UI and set
nvme_core.default_ps_max_latency_us=0
as Kernel Parameter, but using the default kernel (4.15.x). See what speed you get. As well, try
nvme_core.default_ps_max_latency_us=5500
(as described in the wiki), also using the default kernel. Thanks!
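If it helps with testing, here is one way to confirm from a debug session that the parameter actually took effect, assuming the usual sysfs layout for module parameters:
# confirm the parameter made it onto the kernel command line
cat /proc/cmdline
# and that the nvme_core driver picked it up
cat /sys/module/nvme_core/parameters/default_ps_max_latency_us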