Re: [PATCH 09/28] KVM: x86/mmu: Require mmu_lock be held for write in unyielding root iter

From: Sean Christopherson
Date: Mon Nov 22 2021 - 15:20:01 EST


On Mon, Nov 22, 2021, Ben Gardon wrote:
> > + * Holding mmu_lock for write obviates the need for RCU protection as the list
> > + * is guaranteed to be stable.
> > + */
> > +#define for_each_tdp_mmu_root(_kvm, _root, _as_id)			\
> > +	list_for_each_entry(_root, &_kvm->arch.tdp_mmu_roots, link)	\
> >  		if (kvm_mmu_page_as_id(_root) != _as_id) {		\
> > +			lockdep_assert_held_write(&(_kvm)->mmu_lock);	\
>
> Did you mean for this lockdep to only be hit in this uncommon
> non-matching ASID case?

Yes and no. Yes, I intended what I wrote. No, the lockdep assertion isn't intended
to be limited to the memslot address space mismatch case, but at the time I wrote
this I was apparently lazy or inept :-)

In hindsight, this would be better:

/* blah blah blah */
static inline struct list_head *kvm_get_tdp_mmu_roots_exclusive(struct kvm *kvm)
{
	lockdep_assert_held_write(&kvm->mmu_lock);

	return &kvm->arch.tdp_mmu_roots;
}

#define for_each_tdp_mmu_root(_kvm, _root, _as_id)					\
	list_for_each_entry(_root, kvm_get_tdp_mmu_roots_exclusive(_kvm), link)	\
		if (kvm_mmu_page_as_id(_root) != _as_id) {				\
		} else
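
To illustrate the difference, here's a throwaway userspace sketch (mock types and
names, nothing resembling the real KVM structures): the assertion in the helper
fires once per walk even when every root matches the requested address space ID,
whereas an assertion buried in the mismatch branch is skipped entirely in that case.

	/*
	 * Standalone userspace mock, purely for illustration.  All names
	 * below are invented; assert() stands in for lockdep.
	 */
	#include <assert.h>
	#include <stdbool.h>
	#include <stdio.h>

	struct root {
		int as_id;
		struct root *next;
	};

	struct mock_kvm {
		bool mmu_lock_held_for_write;
		struct root *roots;
	};

	/* Stand-in for lockdep_assert_held_write() + returning the list. */
	static struct root *get_roots_exclusive(struct mock_kvm *kvm)
	{
		assert(kvm->mmu_lock_held_for_write);
		return kvm->roots;
	}

	/* The assertion runs at walk entry, regardless of mismatches. */
	#define for_each_root(_kvm, _root, _as_id)				\
		for (_root = get_roots_exclusive(_kvm); _root; _root = _root->next) \
			if (_root->as_id != _as_id) {				\
			} else

	int main(void)
	{
		struct root r1 = { .as_id = 0, .next = NULL };
		struct root r0 = { .as_id = 0, .next = &r1 };
		struct mock_kvm kvm = { .mmu_lock_held_for_write = true, .roots = &r0 };
		struct root *root;
		int visited = 0;

		for_each_root(&kvm, root, 0)
			visited++;

		printf("visited %d roots with matching as_id\n", visited);
		return 0;
	}

With the assertion inside the mismatch branch instead, the walk above would
complete without ever checking the lock, since every root matches the requested
address space ID.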