Performance decrease using Hyper-V Win10 clients
-
I’ve marked this thread as solved, as we now know this is a bug in the kernel and not a bug in FOG. Of course we can still document things here.
-
@Tom-Elliott Sorry for not seeing this sooner. PAGE_SIZE is defined as 4096, so the mask is being set to 4095, which is the same value that iscsi_iser.c uses (~MASK_4K).
From the notes in LIS, I suspect that setting blk_queue_virt_boundary is supposed to ensure that there are no gaps in the sg list, but they are still present, so either the bounce buffer needs to be put back in place or the gaps need to be eliminated elsewhere.
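For context, the LIS/upstream change in question drops the driver’s bounce-buffer copy and instead sets a virt-boundary mask on the request queue. A rough sketch of what that looks like in drivers/scsi/storvsc_drv.c follows; the surrounding code differs between kernel versions, so treat it as illustrative rather than a verbatim quote of the patch:

    /* Sketch of the storvsc change under discussion (storvsc_drv.c).
     * Exact context varies by kernel version. */
    #include <linux/blkdev.h>
    #include <scsi/scsi_device.h>

    static int storvsc_device_configure(struct scsi_device *sdevice)
    {
            /*
             * Instead of copying requests through a contiguous bounce
             * buffer, tell the block layer that no SG element may span a
             * PAGE_SIZE boundary. With 4 KiB pages the mask is 4095, the
             * same value iscsi_iser.c arrives at via ~MASK_4K.
             */
            blk_queue_virt_boundary(sdevice->request_queue, PAGE_SIZE - 1);

            return 0;
    }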
The patch author responded this morning and is looking into the slowdown report. I’ll post any updates as I hear them.
-
Any update on this?
-
@jkozee Yes, please let us know if you have any news on this! As well I’d be interested in general information on using FOG with Hyper-V! I started to work on improving the wiki documentation and it would be great if you would put in your knowledge on this topic.
-
No resolution on this issue yet. (One of?) the authors of the patch has confirmed the behavior and is investigating a kernel solution that doesn’t re-introduce the bounce buffers. No indication of how long this might take.
-
@Sebastian-Roth Sure, I’ll help out if I can. Do you have links to the wiki pages you’re working on?
-
@jkozee Awesome! Good to hear that you got a confirmation on this… please keep us posted. As well I sent you a chat message about the wiki stuff. Thanks!
-
Just tagging this once again. I realize there’s been 5-6 months of “quiet” on this, but any news yet? My patch, as far as I can tell, isn’t working, so I’m wondering if there has been any progress.
-
Replying to this topic as I too have seen a severe (at least in my eyes) degradation in the speed of resizing.
While my original patch was just a guess at the problem, I decided to step outside my own train of thought and follow, I think, more closely the 4096 rule.
While I have no idea what the real PAGE_SIZE will be, it seems to me that this SCSI storage control is aimed more at NVMe and, potentially, virtual SCSI devices. With that in mind, I adjusted the patch so it essentially runs:
    if (PAGE_SIZE - 1 < 4096) {
            blk_queue_virt_boundary(sdevice->request_queue, 4096);
    } else {
            blk_queue_virt_boundary(sdevice->request_queue, PAGE_SIZE - 1);
    }

Whereas my original patch was:

    if (PAGE_SIZE - 1 < 0) {
            blk_queue_virt_boundary(sdevice->request_queue, 0);
    } else {
            blk_queue_virt_boundary(sdevice->request_queue, PAGE_SIZE - 1);
    }
With the original patch, I never really saw an improvement in speed and chalked it up to NTFS just being a pain, or to VMware. My takeaway was that, while I had a patch in place, it wasn’t really helping or hurting anything.
To give some scope: with either the default file or the patched file, a 50 GB Windows XP VMware system started taking nearly 2 minutes to resize (the volume was being shrunk roughly tenfold, from 50 GB to about 5 GB), so I figured, meh, not too bad I suppose. As this thread is about Hyper-V, I wasn’t focused on VMware and just assumed my slowness was due to VMware itself or the way the disk was laid out. (BOY WAS I WRONG.)
I decided to see if I could do anything to speed up the NTFS resize and thought about this thread for a bit. Throwing out the whole idea behind my original patch, I asked myself which devices would really be impacted by this; from what I have seen, that would be NVMe (4k) and the SCSI volumes typically used by VMs (Hyper-V or VMware, possibly others). On the grounds that NVMe is far more important, I decided to use 4096 as the base page size. With this “new” patch, the same system now only takes about 10 seconds to resize.
So I don’t know who we need to report this to (as I’m pretty sure my assumptions aren’t very sound), but the problem is very much tied to this blk_queue_virt_boundary call.
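For anyone digging further: the value passed to blk_queue_virt_boundary is used by the block layer as a bitmask when deciding whether two adjacent SG segments leave a “gap”. The sketch below paraphrases that check (bvec_gap_to_prev); the exact form varies by kernel version, so take it as illustrative only:

    /* Paraphrase of the block layer's virt-boundary gap check. Any bit of
     * the mask set in a segment's start or end offset marks a "gap" the
     * driver must then handle. */
    #include <linux/blkdev.h>
    #include <linux/bvec.h>

    static inline bool bvec_gap_to_prev_sketch(struct request_queue *q,
                                               struct bio_vec *bprv,
                                               unsigned int offset)
    {
            if (!queue_virt_boundary(q))
                    return false;
            return (offset & queue_virt_boundary(q)) ||
                   ((bprv->bv_offset + bprv->bv_len) & queue_virt_boundary(q));
    }

If I read that right, a mask of 4095 forces segment boundaries to be 4 KiB aligned, while a mask of 4096 only tests a single bit, which may be part of why the adjusted patch behaves so differently.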
-
@jkozee @Tom-Elliott Sorry for bringing up such an old topic again. Working on moving towards the new 5.10.x kernel, I was looking at the patches we still apply to our kernel. Most are now part of the upstream kernel, but not the fix discussed in this topic.
The kernel code has changed a bit, though, and I am wondering whether we’d still see the slowness without our fix. Would any of you be able to replicate the issue with a 5.10.x kernel (with and without the fix)?
Searching the web a little more I stumbled upon this patch that made it into the official kernel not long ago: https://patchwork.kernel.org/project/linux-input/patch/20200910143455.109293-12-boqun.feng@gmail.com/
Not sure, but it could play a role in this case. Anyway, it would be great to see whether the issue can still be replicated with the newer kernel, without our fix.