Re: [PATCH V5 0/9] Fixes for vhost metadata acceleration

From: Jason Wang
Date: Tue Aug 13 2019 - 04:31:18 EST



On 2019/8/12 9:02, Jason Gunthorpe wrote:
On Mon, Aug 12, 2019 at 05:49:08AM -0400, Michael S. Tsirkin wrote:
On Mon, Aug 12, 2019 at 10:44:51AM +0800, Jason Wang wrote:
On 2019/8/11 1:52, Michael S. Tsirkin wrote:
On Fri, Aug 09, 2019 at 01:48:42AM -0400, Jason Wang wrote:
Hi all:

This series tries to fix several issues introduced by the metadata
acceleration series. Please review.

Changes from V4:
- switch to using a spinlock to synchronize the MMU notifier with accessors

Changes from V3:
- remove the unnecessary patch

Changes from V2:
- use the seqlock helper to synchronize the MMU notifier with the vhost worker

Changes from V1:
- try not to use RCU to synchronize the MMU notifier with the vhost worker
- set dirty pages after no readers
- return -EAGAIN only when we find the range overlaps with the
  metadata

Jason Wang (9):
vhost: don't set uaddr for invalid address
vhost: validate MMU notifier registration
vhost: fix vhost map leak
vhost: reset invalidate_count in vhost_set_vring_num_addr()
vhost: mark dirty pages during map uninit
vhost: don't do synchronize_rcu() in vhost_uninit_vq_maps()
vhost: do not use RCU to synchronize MMU notifier with worker
vhost: correctly set dirty pages in MMU notifiers callback
vhost: do not return -EAGAIN for non blocking invalidation too early

drivers/vhost/vhost.c | 202 +++++++++++++++++++++++++-----------------
drivers/vhost/vhost.h | 6 +-
2 files changed, 122 insertions(+), 86 deletions(-)
This generally looks more solid.

But this amounts to a significant overhaul of the code.

At this point how about we revert 7f466032dc9e5a61217f22ea34b2df932786bbfc
for this release, and then re-apply a corrected version
for the next one?

If possible, considering we've actually disabled the feature, how about just
queuing those patches for the next release?

Thanks
Sorry if I was unclear. My idea is that
1. I revert the disabled code
2. You send a patch re-adding it with all the fixes squashed
3. Maybe optimizations on top right away?
4. We queue *that* for next and see what happens.

And the advantage over the patchy approach is reviewability: the current
patches are hard to review. E.g. it's not reasonable to ask the RCU guys to
review the whole of vhost for its RCU usage, but it's much more reasonable
to ask them about one specific patch.
I think there are other problems here too, I don't like that the use
of mmu notifiers is so different from every other driver, or that GUP
is called under spinlock.


What kind of issues do you see? The spinlock is there to synchronize GUP with the MMU notifier in this series.
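
To make the intended synchronization concrete, here is a minimal sketch, with made-up names and not the actual vhost code: a spinlock taken by both the MMU notifier and the accessors decides whether a mapping that was pinned earlier with get_user_pages() may still be used; GUP itself is not called under the lock.

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

/*
 * Illustrative sketch only, not the vhost implementation.  The spinlock
 * makes the MMU notifier and the accessors agree on whether a mapping
 * pinned earlier with get_user_pages() may still be used.
 */
static DEFINE_SPINLOCK(map_lock);
static struct page *cached_page;	/* pinned earlier via get_user_pages() */
static int invalidate_count;		/* non-zero while a range is torn down */

/* MMU notifier side: unpublish the mapping before the pages go away. */
static int sketch_invalidate_range_start(struct mmu_notifier *mn,
					 const struct mmu_notifier_range *range)
{
	spin_lock(&map_lock);
	invalidate_count++;
	cached_page = NULL;		/* readers fall back to the slow path */
	spin_unlock(&map_lock);
	return 0;
}

static void sketch_invalidate_range_end(struct mmu_notifier *mn,
					const struct mmu_notifier_range *range)
{
	spin_lock(&map_lock);
	invalidate_count--;
	spin_unlock(&map_lock);
}

static const struct mmu_notifier_ops sketch_ops = {
	.invalidate_range_start	= sketch_invalidate_range_start,
	.invalidate_range_end	= sketch_invalidate_range_end,
};

/* Accessor side: only trust the cached page while no invalidation runs. */
static struct page *sketch_get_cached_page(void)
{
	struct page *page;

	spin_lock(&map_lock);
	page = invalidate_count ? NULL : cached_page;
	spin_unlock(&map_lock);
	return page;
}

The blockable/-EAGAIN handling touched by patch 9 of the series is left out of the sketch.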

Btw, back to the original question: may I know why synchronize_rcu() is not suitable? Consider:

- MMU notifiers are allowed to sleep
- MMU notifiers can be preempted

If you mean something that prevents the RCU grace period from completing, I'm afraid the MMU notifier is not the only victim, and it should be no worse than someone holding a lock for a very long time. If the only concern is preemption of the vhost kthread, I can switch to rcu_read_lock_bh() instead.
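
For reference, a rough sketch of that alternative, again with made-up names and not the actual vhost code: the worker dereferences the cached map inside rcu_read_lock_bh(), which also disables preemption, while the invalidation path unpublishes the map and waits with synchronize_rcu(), which on kernels with the consolidated RCU flavors also waits for the _bh readers.

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct uaddr_map {
	struct page **pages;		/* pre-pinned via get_user_pages() */
	int npages;
};

static struct uaddr_map __rcu *vq_map;	/* made-up name for the cached map */

/* Worker side: the _bh read-side section also disables preemption, so it
 * cannot be stretched by the kthread being scheduled out in the middle. */
static void worker_use_map(void)
{
	struct uaddr_map *map;

	rcu_read_lock_bh();
	map = rcu_dereference_bh(vq_map);
	if (map) {
		/* ... access the pre-pinned pages ... */
	}
	rcu_read_unlock_bh();
}

/* Invalidation side: unpublish the map, wait for readers, then free it. */
static void notifier_drop_map(void)
{
	struct uaddr_map *map;

	map = rcu_dereference_protected(vq_map, 1);
	rcu_assign_pointer(vq_map, NULL);
	synchronize_rcu();	/* with consolidated RCU flavors this also
				   waits for rcu_read_lock_bh() readers */
	kfree(map);
}

Note that synchronize_rcu() in the invalidation path can sleep, which per the first point above is allowed in an MMU notifier.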

Thanks



So I favor the revert-and-try-again approach as well. It is hard to
get a clear picture with these endless bug-fix patches.

Jason


Ok.

Thanks