Re: [PATCH v2 3/3] KVM: x86/mmu: Defer allocation of shadow MMU's hashed page list
From: Sean Christopherson
Date: Tue Apr 15 2025 - 17:53:20 EST
On Tue, Apr 15, 2025, Vipin Sharma wrote:
> On 2025-04-01 08:57:14, Sean Christopherson wrote:
> > +static __ro_after_init HLIST_HEAD(empty_page_hash);
> > +
> > +static struct hlist_head *kvm_get_mmu_page_hash(struct kvm *kvm, gfn_t gfn)
> > +{
> > + struct hlist_head *page_hash = READ_ONCE(kvm->arch.mmu_page_hash);
> > +
> > + if (!page_hash)
> > + return &empty_page_hash;
> > +
> > + return &page_hash[kvm_page_table_hashfn(gfn)];
> > +}
> > +
> >
> > @@ -2357,6 +2368,7 @@ static struct kvm_mmu_page *__kvm_mmu_get_shadow_page(struct kvm *kvm,
> > struct kvm_mmu_page *sp;
> > bool created = false;
> >
> > + BUG_ON(!kvm->arch.mmu_page_hash);
> > sp_list = &kvm->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)];
>
> Why do we need READ_ONCE() at kvm_get_mmu_page_hash() but not here?
We don't (need it in kvm_get_mmu_page_hash()). I suspect past me was thinking
it could be accessed without holding mmu_lock, but that's simply not true. Unless
I'm forgetting something, I'll drop the READ_ONCE() and WRITE_ONCE() in
kvm_mmu_alloc_page_hash(), and instead assert that mmu_lock is held for write.
> My understanding is that it is in kvm_get_mmu_page_hash() to avoid compiler
> doing any read tear. If yes, then the same condition is valid here, isn't it?
The intent wasn't to guard against a tear, but to instead ensure mmu_page_hash
couldn't be re-read and end up with a NULL pointer deref, e.g. if KVM set
mmu_page_hash and then nullified it because some later step failed. But if
mmu_lock is held for write, that is simply impossible.