On Tue, Jul 23, 2019 at 09:34:29PM +0800, Jason Wang wrote:
> On 2019/7/23 6:27 PM, Michael S. Tsirkin wrote:
> > So what orders __get_user_pages_fast wrt the invalidate_count read?
>
> Yes, since there could be multiple concurrent invalidation requests, we need
> to count them to make sure we don't pin the wrong pages.

I don't think this helps at all.
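(For reference, the counting scheme under discussion would look roughly like the
sketch below. Only vq->mmu_lock and vq->invalidate_count are taken from the
thread; the callback names, the mmu_notifier member and the container_of()
lookup are illustrative, not the actual vhost patch. The reason for a count
rather than a flag is the one given above: invalidations of different ranges can
overlap, and the value only drops back to zero once every outstanding
invalidation has reached its invalidate_range_end.)

        /* Illustrative sketch only -- not the actual vhost patch. */
        static int vhost_invalidate_range_start(struct mmu_notifier *mn,
                                                const struct mmu_notifier_range *range)
        {
                struct vhost_virtqueue *vq =
                        container_of(mn, struct vhost_virtqueue, mmu_notifier);

                spin_lock(&vq->mmu_lock);
                vq->invalidate_count++;  /* an invalidation is now in flight */
                /* tear down/unpin any map overlapping range->start..range->end */
                spin_unlock(&vq->mmu_lock);
                return 0;
        }

        static void vhost_invalidate_range_end(struct mmu_notifier *mn,
                                               const struct mmu_notifier_range *range)
        {
                struct vhost_virtqueue *vq =
                        container_of(mn, struct vhost_virtqueue, mmu_notifier);

                spin_lock(&vq->mmu_lock);
                vq->invalidate_count--;  /* this invalidation has finished */
                spin_unlock(&vq->mmu_lock);
        }
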
> > I also wonder about ordering. kvm has this:
> > /*
> >  * Used to check for invalidations in progress, of the pfn that is
> >  * returned by pfn_to_pfn_prot below.
> >  */
> > mmu_seq = kvm->mmu_notifier_seq;
> > /*
> >  * Ensure the read of mmu_notifier_seq isn't reordered with PTE reads in
> >  * gfn_to_pfn_prot() (which calls get_user_pages()), so that we don't
> >  * risk the page we get a reference to getting unmapped before we have a
> >  * chance to grab the mmu_lock without mmu_notifier_retry() noticing.
> >  *
> >  * This smp_rmb() pairs with the effective smp_wmb() of the combination
> >  * of the pte_unmap_unlock() after the PTE is zapped, and the
> >  * spin_lock() in kvm_mmu_notifier_invalidate_<page|range_end>() before
> >  * mmu_notifier_seq is incremented.
> >  */
> > smp_rmb();
> >
> > Does this apply to us? Can't we use a seqlock instead, so we do
> > not need to worry?
>
> I'm not familiar with kvm MMU internals, but we do everything under
> mmu_lock.
>
> Thanks
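(For comparison, the seqlock route being suggested could be shaped roughly as
below, using the kernel's seqcount primitives. The vq->invalidate_seq field is a
hypothetical seqcount_t, and the writer side, i.e. the notifier callbacks, would
have to bump it inside write_seqcount_begin()/write_seqcount_end() while already
serialized, e.g. by vq->mmu_lock. The read side then gets the ordering the KVM
comment above has to supply by hand with smp_rmb():)

        /* Sketch only -- not actual vhost code. */
        static int vhost_map_prefetch_seq(struct vhost_virtqueue *vq,
                                          unsigned long uaddr, int npages,
                                          int write, struct page **pages)
        {
                unsigned int seq;
                int npinned;

        again:
                /* spins while an invalidation is in progress (odd count) */
                seq = read_seqcount_begin(&vq->invalidate_seq);

                npinned = __get_user_pages_fast(uaddr, npages, write, pages);
                if (npinned != npages) {
                        /* partial pin: drop whatever we got and give up */
                        while (npinned > 0)
                                put_page(pages[--npinned]);
                        return -EFAULT;
                }

                if (read_seqcount_retry(&vq->invalidate_seq, seq)) {
                        /* an invalidation ran while we were pinning: undo, retry */
                        while (npinned > 0)
                                put_page(pages[--npinned]);
                        goto again;
                }

                return 0;
        }
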
There's no lock between checking the invalidate counter and the
__get_user_pages_fast() call within vhost_map_prefetch(). So it's possible
that __get_user_pages_fast() reads the PTEs speculatively before
invalidate_count is read.
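(Spelled out, the interleaving being worried about here looks something like
this, assuming nothing orders the PTE reads done by __get_user_pages_fast()
against the invalidate_count check that precedes them in program order:)

        vhost_map_prefetch()                    MMU notifier
        --------------------                    ------------
        PTE for the address read early,
        hoisted ahead of the
        invalidate_count check
                                                invalidate_range_start():
                                                    invalidate_count++
                                                PTEs zapped, pages unmapped
                                                invalidate_range_end():
                                                    invalidate_count--
        reads vq->invalidate_count == 0
        __get_user_pages_fast() completes
        with the stale PTE value and pins
        a page that is no longer mapped
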
--
Jason Wang replied:

In vhost_map_prefetch() we do:

        spin_lock(&vq->mmu_lock);
        ...
        err = -EFAULT;
        if (vq->invalidate_count)
                goto err;
        ...
        npinned = __get_user_pages_fast(uaddr->uaddr, npages,
                                        uaddr->write, pages);
        ...
        spin_unlock(&vq->mmu_lock);
Is this not sufficient?
Thanks
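(Put next to the notifier sketch earlier, the structure being pointed at is that
both paths run under the same vq->mmu_lock, and the PTEs are only zapped after
invalidate_range_start() has returned, i.e. with invalidate_count already
raised:)

        vhost_map_prefetch()                    invalidate_range_start()
        --------------------                    ------------------------
        spin_lock(&vq->mmu_lock)                spin_lock(&vq->mmu_lock)
        if (vq->invalidate_count)               vq->invalidate_count++
                goto err                        spin_unlock(&vq->mmu_lock)
        __get_user_pages_fast(...)              ... PTEs zapped afterwards;
        spin_unlock(&vq->mmu_lock)              invalidate_count stays elevated
                                                until invalidate_range_end()

Whether taking the spinlock is also enough to order __get_user_pages_fast()'s
PTE reads against the invalidate_count read seems to be exactly the question
raised at the top of the thread.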