On Tue, Jul 23, 2019 at 03:53:06PM +0800, Jason Wang wrote:
> On 2019/7/23 3:23 PM, Michael S. Tsirkin wrote:
> > > > Really let's just use kfree_rcu. It's way cleaner: fire and forget.
> > >
> > > Looks not, you need to rate limit the fire as you've figured out?
> >
> > See the discussion that followed. Basically no, it's good enough
> > already and is only going to be better.
> >
> > > And in fact, the synchronization is not even needed, does it help if I
> > > leave a comment to explain?
> >
> > Let's try to figure it out in the mail first. I'm pretty sure the
> > current logic is wrong.
> Here is what the code wants to achieve:
>
> - The map is protected by RCU.
>
> - Writers are: MMU notifier invalidation callbacks, file operations (ioctls
>   etc.), and meta_prefetch (datapath).
>
> - Readers are: the memory accessors.
>
> Writers are synchronized through mmu_lock. RCU is used to synchronize
> between writers and readers.
>
> The synchronize_rcu() in vhost_reset_vq_maps() was used to synchronize it
> with the readers (memory accessors) in the path of file operations. But in
> that case vq->mutex is already held, which means it is already serialized
> with the memory accessors. That's why I think it can be removed safely.
>
> Anything I miss here?
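Just so we are looking at the same picture, the scheme described above boils
down to the classic pattern below. This is only a simplified sketch with
made-up demo_* names (demo_map, demo_vq, demo_access, demo_reset_map), not
the actual vhost code, and error handling is omitted:

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Illustrative stand-ins only; not the real vhost structures. */
struct demo_map {
	void *addr;
};

struct demo_vq {
	struct demo_map __rcu *map;	/* readers reach it through RCU */
	spinlock_t mmu_lock;		/* serializes the writers */
};

/* Reader (memory accessor): just an RCU read-side critical section. */
static void *demo_access(struct demo_vq *vq)
{
	struct demo_map *map;
	void *addr = NULL;

	rcu_read_lock();
	map = rcu_dereference(vq->map);
	if (map)
		addr = map->addr;
	rcu_read_unlock();

	return addr;
}

/* Writer (reset from a file operation): unpublish the map, wait for
 * readers to finish, then free.  This is the synchronize_rcu() in
 * question.
 */
static void demo_reset_map(struct demo_vq *vq)
{
	struct demo_map *map;

	spin_lock(&vq->mmu_lock);
	map = rcu_dereference_protected(vq->map,
					lockdep_is_held(&vq->mmu_lock));
	RCU_INIT_POINTER(vq->map, NULL);
	spin_unlock(&vq->mmu_lock);

	if (map) {
		synchronize_rcu();
		kfree(map);
	}
}

If I read the argument right, the claim is that when demo_reset_map() is only
ever called with vq->mutex held, and the readers also only run under
vq->mutex, then that synchronize_rcu() is redundant.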
So invalidate callbacks need to reset the map, and they do
not have the vq mutex. How can they do this and free
the map safely? They need synchronize_rcu or kfree_rcu, right?
And I worry somewhat that synchronize_rcu in an MMU notifier
is a problem: MMU notifiers are supposed to be quick,
and they run inside a read-side critical section of SRCU.
If we could get rid of RCU that would be even better.
But now I wonder:
- invalidate_start has to mark the page as dirty
  (this is what my patch added; the current code misses this);
- at that point the kernel can come and make the page clean again;
- at that point VQ handlers can keep a copy of the map
  and change the page again.

At this point I don't understand how we can mark the page dirty
safely.
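For concreteness, the kind of invalidate callback we are talking about would
look roughly like this. Again a hand-wavy sketch reusing the made-up demo_*
names (and assuming demo_vq embeds the struct mmu_notifier and demo_map
remembers the pages it pinned), not the actual patch:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

/*
 * On top of the sketch above, assume demo_vq embeds the notifier and
 * demo_map remembers what it pinned:
 *
 *	struct demo_map { void *addr; struct page **pages; int npages; };
 *	struct demo_vq  { ...; struct mmu_notifier mn; };
 */
static int demo_invalidate_range_start(struct mmu_notifier *mn,
				       const struct mmu_notifier_range *range)
{
	struct demo_vq *vq = container_of(mn, struct demo_vq, mn);
	struct demo_map *map;
	int i;

	/* No vq->mutex here: this is called from the MMU notifier, so the
	 * only serialization against the other writers is mmu_lock.
	 */
	spin_lock(&vq->mmu_lock);
	map = rcu_dereference_protected(vq->map,
					lockdep_is_held(&vq->mmu_lock));
	RCU_INIT_POINTER(vq->map, NULL);
	spin_unlock(&vq->mmu_lock);

	if (map) {
		/* Mark the pages dirty before they go away.  The race above
		 * is that a VQ handler still holding an old copy of the map
		 * can redirty a page after this point, after the kernel has
		 * already cleaned it.
		 */
		for (i = 0; i < map->npages; i++)
			set_page_dirty_lock(map->pages[i]);

		/* How to free the old map is the open question:
		 * synchronize_rcu() here runs inside the notifier (see the
		 * concern above), kfree_rcu() is the fire-and-forget
		 * alternative (see below).
		 */
	}

	return 0;
}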
> > > Btw, for the kvm ioctls it still uses synchronize_rcu() in
> > > kvm_vcpu_ioctl() (just a little bit harder to trigger):
> >
> > AFAIK these never run in response to guest events.
> > So they can take very long and guests still won't crash.
>
> Ok, this looks like I have some misunderstanding here of the reason why
> synchronize_rcu() is not preferable in the path of an ioctl. But in the kvm
> case, if rcu_expedited is set, it can trigger IPIs AFAIK.

Yes, expedited is not good for something the guest can trigger.
What we do not want is a synchronize_rcu that the guest is blocked on.

> What if the guest manages to escape to qemu?
>
> Thanks

Then it's going to be slow. Why do we care?
Let's just use kfree_rcu if we can. Paul said even though
documentation still says it needs to be rate-limited, that
part is basically stale and will get updated.
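FWIW, with a sketch like the one above, switching to kfree_rcu() is a tiny
change: give the map an rcu_head and drop the synchronize_rcu(). Illustrative
only, same made-up names and caveats as before:

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* The map just needs an rcu_head so the free can be deferred. */
struct demo_map {
	void *addr;
	struct rcu_head rcu;
};

/* Instead of
 *	synchronize_rcu();
 *	kfree(map);
 * the writer queues the free and returns immediately:
 */
static void demo_free_map(struct demo_map *map)
{
	if (map)
		kfree_rcu(map, rcu);	/* fire and forget */
}

The writer side then never blocks, which is what matters for the MMU notifier
path; the cost is just that the old map is actually freed a grace period
later.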