Very slow cloning speed on specific model
-
@darkxeno @dylan123 @Middle @oleg-knysh Build is done. Anyone keen to test?
sudo -i
cd /var/www/html/fog/service/ipxe
wget https://fogproject.org/kernels/bzImage-4.9.51
wget https://fogproject.org/inits/init-4.9.x.xz
chown apache:apache bzImage-4.9.51 init-4.9.x.xz
-
@Quazz said:
edit2: On the note of pigz, Buildroot provides a version these days, so we could remove the manually specified package unless we are attached to version 2.3.4 for some reason
I don’t think we are bound to use 2.3.4 - not that I know of. I just pushed a change moving us to buildroot 2019.04.8 and its official pigz version 2.4.
-
I’m noticing an issue as well. We just received some HP 840 G6 laptops that came pre-loaded with bios version 01.03.00, and I was able to image them just fine. But then I decided to upgrade one of them to bios version 01.03.04 before imaging, and now the imaging process is super slow. It also looks like bios version 01.03.00 isn’t available to download, so now I’m stuck waiting for a bios update, I guess. I’ve talked to HP chat and they didn’t have 01.03.00 bios available to download.
-
@bberret said in Very slow cloning speed on specific model:
I’ve talked to HP chat and they didn’t have 01.03.00 bios available to download.
Probably good to keep on asking HP (email, telephone, …) about this to make them aware of the problem.
-
@bberret Will you try something for us? In FOG Configuration -> FOG Settings -> General Settings there is a KERNEL ARGS parameter. Will you place this value in that field:
nvme_core.default_ps_max_latency_us=0
and then try imaging again? You may see a warning about that variable during imaging, but ignore it; it is a spurious error that is fixed in the yet unreleased FOG 1.5.8. This setting tells the NVMe drive not to go into a low-power mode during imaging.
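If you want to double check that the argument actually made it to the target, a quick look from a FOG debug session could be something like this (just standard Linux proc/sysfs reads, nothing FOG-specific is assumed here):
cat /proc/cmdline
cat /sys/module/nvme_core/parameters/default_ps_max_latency_us
The first should list nvme_core.default_ps_max_latency_us=0 among the boot arguments and the second should print 0 once the nvme_core module has picked it up.
-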
@george1421 Thanks for the quick reply. I tried what you suggested and it didn’t seem to help. And now my theory about bios version 01.03.00 doesn’t seem to be holding true either: I just unboxed another laptop and it is having the exact same issue. This issue isn’t just slow imaging; it is slow loading of pretty much everything after the ipxe.efi file loads. Loading the bzImage file, which usually takes 1 second, is taking 15 minutes.
-
@bberret So two tests come to mind.
-
Does this computer have a legacy BIOS mode? If so, as a test, change it to BIOS mode and see if the bzImage and init.xz transfer speeds are normal.
-
Do you have an add-on (PCIe) card that has PXE booting capabilities? The idea is to take the onboard PXE firmware and network card out of the path and see if the bzImage transfer is normal or not.
-
I thought about booting FOS Linux from a USB stick, but you are telling me that imaging is not fast either. If imaging were fast but PXE booting slow, then I might point to iPXE as the problem; but in this case booting from a USB stick will not mask/test the issue.
Off the top of my head I would say it is either the network adapter or the PXE/UEFI firmware.
-
@george1421 One option for eliminating the network card as the source of the problem (not completely, but mostly) is to boot with a USB network adapter plugged in as well, with both adapters connected to the network, and then unplug the cable from the built-in adapter after iPXE loads.
-
@Sebastian-Roth said in Very slow cloning speed on specific model:
@darkxeno @dylan123 @Middle @oleg-knysh Build is done. Anyone keen to test?
sudo -i
cd /var/www/html/fog/service/ipxe
wget https://fogproject.org/kernels/bzImage-4.9.51
wget https://fogproject.org/inits/init-4.9.x.xz
chown apache:apache bzImage-4.9.51 init-4.9.x.xz
Thanks @Sebastian-Roth, I’ve just got back from leave. I ended up just manually setting up the device and unfortunately don’t have another one to test with. If I do, I’ll give this a test and see if it makes a difference. Thanks again for your assistance.
-
@Sebastian-Roth said in Very slow cloning speed on specific model:
@darkxeno @dylan123 @Middle @oleg-knysh Build is done. Anyone keen to test?
sudo -i
cd /var/www/html/fog/service/ipxe
wget https://fogproject.org/kernels/bzImage-4.9.51
wget https://fogproject.org/inits/init-4.9.x.xz
chown apache:apache bzImage-4.9.51 init-4.9.x.xz
Sorry for the late update. This didn’t change anything, I’m afraid. I’ve been a little reluctant to update during testing as the results I’ve had have been very inconsistent. I can’t explain it, but the first deployment of the day works without issues - it’s happened too many times now to be a coincidence.
I’ve currently changed back to the master branch to clean up the test kernels/inits we’ve been using on the server. The only consistent deploy I can get is using the init_partclone.xz that @Quazz posted on the 5th Dec, entering debug mode and running the following:
nvme set-feature -f 0x0c -v=0 /dev/nvme0
This works every time. The average transfer speed is around 3GB/min rather than the >10GB/min I get from the master branch build when it randomly works, but that’s good enough for us as the result is consistent and the image is small anyway.
This is the Google Drive link Quazz provided for the init: https://drive.google.com/open?id=1u_HuN5NSpzb7YmQBAsrzDELteNmlWUWU
I’ve had a look at the postinit script option to see if I can automate this rather than entering debug mode, but I’m not really sure what I’m doing there.
-
@Middle Can you try the KERNEL ARGS George suggested?
nvme_core.default_ps_max_latency_us=0
-
@Quazz We’ve tried that as well but it doesn’t help. I think it’s also included in the dev branch by default now, which we’ve tried.
-
@Middle That’s really unfortunate. We were hoping that would be sufficient for all use cases, but it seems that in some cases it isn’t.
We also can’t disable it globally since that would hinder performance for a lot of users that don’t have this issue…
You can try setting the command in the postinit script (/images/dev/postinitscripts/fog.postinit).
The command should execute after the init has loaded and before FOG starts its magic.
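For reference, a minimal sketch of what that addition could look like, using the exact command you ran from debug mode (the /dev/nvme0 controller node is an assumption based on your hardware; adjust if needed):
# appended to /images/dev/postinitscripts/fog.postinit
if [ -e /dev/nvme0 ]; then
    # disable APST (NVMe feature 0x0c) so the drive stays out of low-power states while imaging
    nvme set-feature -f 0x0c -v=0 /dev/nvme0
fi
The existence check simply keeps the script from erroring out on hosts that don’t have an NVMe drive.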
-
@Quazz That’s great - setting the nvme command in the fog.postinit file and making the init_partclone.xz you created the system-wide default means we can now just PXE boot and select deploy image without issues. Huge improvement. Many thanks.
Edit: for clarification, it’s the nvme set-feature -f 0x0c -v=0 /dev/nvme0 command I’m using, not the latency one.
-
@Middle said in Very slow cloning speed on specific model:
nvme set-feature -f 0x0c -v=0 /dev/nvme0
So the solution was to put that under FOG general settings? I tried it and it didn’t work on my end.
I’m also facing the same issue. I have 5 HP 840 G6s and only 4 of them have this issue; one of them does not. They’re all identical laptops in hardware and software - even the BIOS is the same.
I can fog the rest of my 800 computers perfectly fine at normal speed. It’s just these new G6s.
disk drive: Samsung MZVLB256HAHQ-000H1
NIC: Intel Ethernet I219-LM
BIOS ver: R70 Ver. 01.03.04 11/06/2019
FOG server: 1.5.6 on Ubuntu 18.04.3 LTS
-
@nrg The following is based on using the FOG 1.5.7 master branch on CentOS.
Are you using the updated init_partclone.xz? Here’s the link:
https://drive.google.com/open?id=1u_HuN5NSpzb7YmQBAsrzDELteNmlWUWU
You need to copy this to /var/www/html/fog/service/ipxe and then run
chown apache:apache init_partclone.xz
to update the file permissions.
We only have G6 laptops for imaging now, so I made this the default init under Fog Config > Fog Settings > TFTP Server > PXE BOOT IMAGE by entering init_partclone.xz. Alternatively, if you register hosts, you can set it in the host init section for just the G6 laptops.
You then have to edit the file /images/dev/postinitscripts/fog.postinit and add the line
nvme set-feature -f 0x0c -v=0 /dev/nvme0
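If you want to confirm the setting actually took effect, you can drop into a debug deploy and read the feature back with nvme-cli (feature 0x0c is Autonomous Power State Transition, which is what this disables):
nvme get-feature -f 0x0c -H /dev/nvme0
# a current value of 0 means APST is disabled for this boot
This is just a sanity check; it isn’t required for the fix to work.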
-
We have had some success using the snponly.efi boot file instead of ipxe.efi. Updating to FOG version 1.5.7.100 might have helped as well. I also noticed today that HP released a newer BIOS version for the HP 840 G6, version 01.04.02, which seems to include a lot of bug fixes. I might give it a try tomorrow, but I figured someone else might want to try it out before me and let me know how it goes.
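For anyone else who wants to try the snponly.efi switch: where you change the boot file depends on how your DHCP is set up, but as a rough sketch for ISC dhcpd (the addresses are placeholders; snponly.efi should already be sitting in /tftpboot on the FOG server):
# /etc/dhcp/dhcpd.conf (excerpt)
subnet 10.0.0.0 netmask 255.255.255.0 {
    range 10.0.0.100 10.0.0.200;
    next-server 10.0.0.10;       # FOG/TFTP server (placeholder)
    filename "snponly.efi";      # previously "ipxe.efi"
}
Restart the dhcpd service afterwards so the new boot file name is handed out.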
-
@darkxeno We have the same issue with the Dell Ultra 7090.
We’ve updated the kernel and are on the latest stable version of FOG.
All our other machines image in a few minutes with SSDs in them, around 10 minutes with HDDs. These new ones take hours, which is not right.
Doing one now and it’s transferring at around 200-300 Mbps when we normally run at 1 gig. We’ve tested all our cables, and as before, older machines like the Latitude 5480/5490/5400 and OptiPlex 3020/3010/3050/3060 desktops are all fine.
-
@itsupport This topic is fairly old and huge! No idea whether the Dell Ultra 7090 you seem to see an issue with was even discussed here. Please open your own topic and post details there (exact FOG and kernel versions and so on). It’s always good to post a link to this topic as a reference as well.
-
@itsupport Hello, I was curious whether a solution was ever found for the slow imaging on the Dell 7090 Ultras? We recently purchased this model to refresh our labs; however, we can’t get the image to push to the PCs. We are running FOG 1.5.9 and I just updated the kernel to 5.10.34 TomElliot 64.
bzImage Version: 5.10.34
bzImage32 Version: 5.10.34
It seems to hang for a while on “Restoring Partition Tables (GPT)”.
Any help would be greatly appreciated! Thank you!