Re: [PATCH v7 08/14] KVM: Rename mmu_notifier_*

From: Sean Christopherson
Date: Wed May 24 2023 - 16:16:17 EST


On Wed, May 24, 2023, Kautuk Consul wrote:
> On 2023-05-23 07:19:43, Sean Christopherson wrote:
> > On Tue, May 23, 2023, Kautuk Consul wrote:
> > > On 2022-07-06 16:20:10, Chao Peng wrote:
> > > > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > > > index e9153b54e2a4..c262ebb168a7 100644
> > > > --- a/include/linux/kvm_host.h
> > > > +++ b/include/linux/kvm_host.h
> > > > @@ -765,10 +765,10 @@ struct kvm {
> > > >
> > > > #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
> > > > struct mmu_notifier mmu_notifier;
> > > > - unsigned long mmu_notifier_seq;
> > > > - long mmu_notifier_count;
> > > > - gfn_t mmu_notifier_range_start;
> > > > - gfn_t mmu_notifier_range_end;
> > > > + unsigned long mmu_updating_seq;
> > > > + long mmu_updating_count;
> > >
> > > Can we convert mmu_updating_seq and mmu_updating_count to atomic_t?
> >
> > Heh, can we? Yes. Should we? No.
> >
> > > I see that not all accesses to these are under the kvm->mmu_lock
> > > spinlock.
> >
> > Ya, working as intended. Ignoring gfn_to_pfn_cache for the moment, all accesses
> > to mmu_invalidate_in_progress (was mmu_notifier_count / mmu_updating_count above)
> > are done under mmu_lock. And for mmu_notifier_seq (mmu_updating_seq above),
> > all writes and some reads are done under mmu_lock. The only reads that are done
> > outside of mmu_lock are the initial snapshots of the sequence number.
> >
> > gfn_to_pfn_cache uses a different locking scheme, the comments in
> > mmu_notifier_retry_cache() do a good job explaining the ordering.
> >
> > > This would also remove the need for the separate smp_wmb() and
> > > smp_rmb() memory barriers around accesses to these structure members.
> >
> > No, the memory barriers aren't there to provide any kind of atomicity. The barriers
> > exist to ensure that stores and loads to/from the sequence and invalidate in-progress
> > counts are ordered relative to the invalidation (stores to counts) and creation (loads)
> > of SPTEs. Making the counts atomic changes nothing because atomic operations don't
> > guarantee the necessary ordering.
> I'm not saying that the memory barriers provide atomicity.
> My comment was based on the assumption that "all atomic operations are
> implicit memory barriers". If that assumption were true, then we wouldn't
> need the memory barriers here if we used atomic operations to protect
> these two structure members.

Atomics aren't memory barriers on all architectures, e.g. see the various
definitions of smp_mb__after_atomic().
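
Note that even if the two fields were converted to, say, atomic_long_t, the
invalidate-end path would still need an explicit barrier: non-value-returning
RMW ops like atomic_long_inc() provide no ordering at all (see
Documentation/atomic_t.txt). Purely illustrative sketch, NOT the actual KVM
code, using the mmu_updating_* names from this patch:

        /* hypothetical atomic_long_t version of the invalidate-end path */
        atomic_long_inc(&kvm->mmu_updating_seq);       /* unordered RMW */

        /*
         * The sequence bump must be visible before the in-progress count
         * is decremented, so an explicit barrier is still required; the
         * atomic ops themselves don't provide it.
         */
        smp_mb__after_atomic();

        atomic_long_dec(&kvm->mmu_updating_count);     /* unordered RMW */

In other words, converting to atomics doesn't get rid of a single barrier,
it just adds churn.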

Even if atomic operations did provide barriers, using an atomic here would be
overkill and a net negative. On strongly ordered architectures like x86, the
smp_rmb() and smp_wmb() barriers in question are just compiler barriers, whereas
atomic RMW operations can be significantly more expensive. Of course, the only
accesses outside of mmu_lock are reads, so on x86 that "atomic" access is just a
READ_ONCE() load, but that's not the case for all architectures.
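
For reference, the only lockless access is the initial snapshot in the page
fault path; everything else is (re)checked under mmu_lock. A simplified
sketch of the pattern, not verbatim from any one fault handler:

        unsigned long mmu_seq;
        kvm_pfn_t pfn;

        mmu_seq = kvm->mmu_updating_seq;        /* lockless snapshot */
        smp_rmb();                              /* compiler barrier on x86 */

        /* Fault in the host page; may sleep, may race with invalidation. */
        pfn = __gfn_to_pfn_memslot(...);

        spin_lock(&kvm->mmu_lock);
        /*
         * Retry the fault if an invalidation is in progress or the sequence
         * has changed, i.e. if the pfn may already be stale.
         */
        if (kvm->mmu_updating_count)
                goto out_unlock;
        smp_rmb();      /* pairs with the smp_wmb() in the invalidate-end path */
        if (kvm->mmu_updating_seq != mmu_seq)
                goto out_unlock;

        /* ... safe to install the SPTE ... */
out_unlock:
        spin_unlock(&kvm->mmu_lock);

Making that snapshot an atomic_long_read() would generate essentially the
same code on x86 and wouldn't eliminate either smp_rmb().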

Anyways, the point is that atomics and memory barriers are different things that
serve different purposes.