Slow Unicast Deploy on New Machines
-
Another day, another data point:
I’ve copied the decompressed image file to an external USB3 SSD (over NFS, at 100+MB/s with rsync), and while in the debug-deploy shell, I ran partclone using the SSD as the source.
The partclone session started out fast but, as with the NFS-based sessions, began to slow down at about 5%. By 7%, transfer speed from the SSD was around 800MB/min, and by 10% it was down to 600MB/min.
/var/log/partclone.log showed similar write fragmentation patterns to what I posted last night.
I’m going to look next at kernel tunables to see if there are any I/O buffers I can set larger.
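The writeback tunables seem like the obvious first place to check. Something like this just lists the candidates I plan to compare on both kernels (inspection only, nothing changed):
sysctl vm.dirty_ratio vm.dirty_background_ratio vm.dirty_bytes vm.dirty_background_bytes
sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs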
-
On a partclone mailing list, someone mentioned a test to determine whether write I/O is the bottleneck: restore to /dev/null.
I tried that, and got a solid 13GB/min from the SSD and 7.3GB/min from the NFS share.
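For reference, the command was roughly this (the source path is just illustrative):
partclone.restore -s /mnt/ssd/d1p2.img -o /dev/null
Since /dev/null discards everything, this exercises only the read-and-decode side; no target writes happen.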
This tells me write performance to the m.2 drive is probably the culprit.
Any kernel parameters I should be looking at? I will be doing a diff between the output of sysctl -a on an Ubuntu 18.04 machine and on the FOS client kernel.
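For the record, the comparison I have in mind is roughly:
sysctl -a | sort > sysctl-ubuntu.txt   # on the Ubuntu 18.04 machine
sysctl -a | sort > sysctl-fos.txt      # in the FOS debug shell
diff sysctl-ubuntu.txt sysctl-fos.txt  # after copying both files to one machine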
-
@tomierna It might be related to the kernel driver itself. You’ve done a hell of a lot of debugging here and I’ve lost a bit of where you are in the process.
What kernel version works in Ubuntu?
In Ubuntu, if you run
lspci -nnk
it should show you each device and the kernel driver in use. Thinking about it, it might not show you the disk controller if it’s not connected via the internal PCIe bus. Also, if you boot into a FOG debug session, mount the m.2 drive (/dev/sda1) over /mnt, and then run this dd command:
dd if=/dev/zero of=/mnt/test1.img bs=1G count=1 oflag=direct
What throughput does it provide? This just writes zeros to the test1.img file as fast as it can.
-
@george1421 - I’ve restored an image over the Ubuntu install, but I will try a live boot and see if I can do the lspci command from there.
Re: write speed to the m.2 SSD within the FOS debug session:
dd if=/dev/zero of=./test1.img bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.40934 s, 762 MB/s

dd if=/dev/zero of=./test1.img bs=2G count=16 oflag=direct iflag=fullblock
16+0 records in
16+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 159.576 s, 215 MB/s
A larger file is slower, but still way faster than GbE speeds.
-
@tomierna So this shows us that the target computer can create files faster than the Ethernet link can deliver them. So then: if you mount the fog server’s /images/dev via NFS from the target computer (debug session), what rates do you get? (Trying the divide-and-conquer method.) Again, this is under FOS. If network rates are normal, then the slowness might be partclone or gzip/zstd slowing things down.
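Something like this from the FOS debug shell should do it (substitute your fog server’s IP and an actual image file for the placeholders):
mkdir -p /images
mount -o nolock <fog_server_ip>:/images/dev /images
dd if=/images/<some_image_file> of=/dev/null bs=1M   # raw NFS read rate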
[edit] I’m not sure this will really tell us anything since you can upload at normal speed, so it will probably add no value to test. [/edit]
-
@george1421 I’ve already copied the image via NFS with rsync to the internal m.2 drive, at GbE speeds.
I also excluded pigz and cat by pre-decompressing the image and trying the partclone.restore command from the command line.
[edit] I just did another test while deploying 6 unicast T410i machines at the same time: rsync from NFS to the internal m.2 drive was getting 60MB/s while each of the unicasts was doing 5.5GB/min (91MB/s). About halfway through the rsync, some of the unicasts finished, and rsync took up the freed bandwidth, peaking at 110MB/s. [/edit]
-
@tomierna Well from the sounds of it, you really don’t have a problem do you??
All of the bits work perfectly, just not together.
-
@george1421 LOL, yeah. Super frustrating.
It really does seem like an interaction between partclone.restore and the m.2 SSD (or maybe the FOS kernel’s support of that device).
Right now I’m running a partclone.restore from the command line of a debug deploy from the NFS share to the external USB3 SSD I’ve got connected. Solid 7.3GB/min.
Tomorrow I will try booting from Ubuntu Live and installing Partclone, to see if the same problem exists there; maybe that will show which part of the NVMe subsystem needs tweaking in the FOS kernel.
-
@george1421 I’ve tested partclone over NFS to m.2 under Ubuntu 18.04 now.
The exact same issue is happening there with partclone.
I ran partclone.restore to /dev/null from the FOG NFS images share to get a non-writing baseline of network performance, and it showed 6.8GB/min.
Then I ran partclone.restore to the m.2 drive, and it started at 14GB/min, and by 4% it was down to 2GB/min. By 50% it was down to 450MB/min.
The /var/log/partclone.log showed multiple writes per buffer, like I outlined in another post.
I guess it’s time for me to post in the partclone forums?
-
@tomierna You are doing a great job! Please keep us posted.
-
@sebastian-roth Thank you, Sebastian.
This is getting weirder by the day.
I went back to the Ubuntu test machine today to try and look for differences, and partclone.restore from NFS to the m.2 SSD ran at expected speeds!
Going back through my shell history, I noticed I had never unmounted the partition I was cloning onto.
So, after the restore completed, I unmounted the partition and ran the partclone.restore again. Boom, slow.
Then remounted, re-ran command, boom, fast again.
I did this a few more times to make sure I wasn’t seeing things, but sure enough, on the Ubuntu machine, when the target partition is mounted, partclone.restore writes at GbE speeds. When the target partition is not mounted, the restore speed falls to about 450MB/min.
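For clarity, the sequence on the Ubuntu machine was roughly this (device and image paths are illustrative):
mount /dev/nvme0n1p2 /mnt                                     # target partition mounted
partclone.restore -s /srv/images/d1p2.img -o /dev/nvme0n1p2   # fast, GbE speed
umount /mnt                                                   # target partition unmounted
partclone.restore -s /srv/images/d1p2.img -o /dev/nvme0n1p2   # slow, ~450MB/min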
I tried this on the FOG Client machine, but partclone exits because it knows the partition is mounted.
Thinking this might be due to the partclone version 0.2.89 on FOS, I copied over the 0.3.11 binaries and libraries.
This allowed it to run the clone despite the partition being mounted, but it was still slow.
I looked back at the history on the Ubuntu machine, and the FS I had mounted the first time I had a fast restore was ext4. Subsequent times it was NTFS (from the image).
So, I did an mkfs.ext4 on the partition on the FOS machine, mounted it, and ran the partclone. IT RAN AT GbE SPEEDS!!!
However, subsequent unmount/remount did not allow another restore to run quickly. I’m just about to test formatting as ext2 and trying the restore with that mounted to see if it matters which FS.
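For reference, the FOS-side test was roughly this (device name illustrative):
mkfs.ext4 /dev/nvme0n1p2
mount /dev/nvme0n1p2 /mnt
partclone.restore -s /images/d1p2.img -o /dev/nvme0n1p2   # ran at GbE speed
Note the restore lays NTFS back down on the partition, which may be why the rerun after unmount/remount was slow again.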
-
So apparently on the Ubuntu machine, as long as the partition is mounted, a restore is fast.
On the FOS Client, the partition has to be formatted as a FS other than NTFS and mounted.
I’m too far down the rabbit hole to see how this makes any sense.
-
@tomierna I see you’re doing a ton of research trying to narrow down the problem, but I have to agree that none of this makes sense, and it seems to be specific to the m.2 SSDs. Why would it being mounted matter anyway? (I’m not expecting you to know the answer, nor do I know it lol)
-
@tom-elliott I’m pretty stumped myself.
And why does it matter on the FOS Client that it is not NTFS? FUSE NTFS version differences between FOS and Ubuntu?
-
We bought 50 of these machines and one arrived with a cracked screen. I just received the replacement from the RMA of that broken machine, and of course it images at full speed.
The replacement machine came with a Samsung m.2 drive, part: MZVLW256HEHP-000L7
The other 49 machines have the Lenovo equivalent: LENSE20256GMSP34MEAT2TA
I’ve contacted my Lenovo rep with the hopes that I can work with an engineer to narrow down a fix.
-
@tomierna Thank you for providing feedback on this issue.
I wonder if you could purchase a spare Samsung m.2 drive and field-upgrade a second one of these systems to see if it IS the m.2 drive at fault. The other option is that they made a firmware/hardware modification mid-production-run that corrected the issue(??)
-
@george1421 I might just try that, just for troubleshooting purposes.
Re: firmware - There is a BIOS update for the machines and a firmware update for the Samsung NVMe drive. Sadly, trying these was my first troubleshooting step (not listed here because it came before I suspected FOG components). I sure was holding my breath that it was the drive firmware, though!