Re: [PATCH 1/2] KVM: Block memslot updates across range_start() and range_end()
From: Sean Christopherson
Date: Mon Aug 02 2021 - 14:30:25 EST
On Tue, Jul 27, 2021, Paolo Bonzini wrote:
> @@ -764,8 +769,9 @@ static inline struct kvm_memslots *__kvm_memslots(struct kvm *kvm, int as_id)
> {
> as_id = array_index_nospec(as_id, KVM_ADDRESS_SPACE_NUM);
> return srcu_dereference_check(kvm->memslots[as_id], &kvm->srcu,
> - lockdep_is_held(&kvm->slots_lock) ||
> - !refcount_read(&kvm->users_count));
> + lockdep_is_held(&kvm->slots_lock) ||
> + READ_ONCE(kvm->mn_active_invalidate_count) ||
Hmm, I'm not sure we should add mn_active_invalidate_count as an exception to
holding kvm->srcu. It made sense in the original (flawed) approach because the
exception was a lockdep_is_held() check, i.e. it was verifying that the current
task holds the lock. With mn_active_invalidate_count, this only verifies that
there's an invalidation in-progress; it doesn't verify that this task/CPU is the
one doing the invalidation.
Since __kvm_handle_hva_range() takes SRCU for read, maybe it's best to omit this?
> + !refcount_read(&kvm->users_count));
> }
>
> static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
...
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 5cc79373827f..c64a7de60846 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -605,10 +605,8 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>
> /*
> * .change_pte() must be surrounded by .invalidate_range_{start,end}(),
Nit, the comma can be switched to a period. The next patch starts a new sentence,
so it would be correct even in the long term.
> - * and so always runs with an elevated notifier count. This obviates
> - * the need to bump the sequence count.
> */
> - WARN_ON_ONCE(!kvm->mmu_notifier_count);
> + WARN_ON_ONCE(!READ_ONCE(kvm->mn_active_invalidate_count));
>
> kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn);
> }
Nits aside,
Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>