Re: [PATCH v2 1/6] KVM: x86/mmu: add a new mmu zap helper to indicate memtype changes
From: Yan Zhao
Date: Thu May 25 2023 - 06:39:24 EST
On Wed, May 24, 2023 at 07:50:24AM -0700, Sean Christopherson wrote:
> On Wed, May 24, 2023, Yan Zhao wrote:
> > On Tue, May 23, 2023 at 03:51:49PM -0700, Sean Christopherson wrote:
> > > diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
> > > index 3eb6e7f47e96..a67c28a56417 100644
> > > --- a/arch/x86/kvm/mtrr.c
> > > +++ b/arch/x86/kvm/mtrr.c
> > > @@ -320,7 +320,7 @@ static void update_mtrr(struct kvm_vcpu *vcpu, u32 msr)
> > >  	struct kvm_mtrr *mtrr_state = &vcpu->arch.mtrr_state;
> > >  	gfn_t start, end;
> > >
> > > -	if (!tdp_enabled || !kvm_arch_has_noncoherent_dma(vcpu->kvm))
> > > +	if (!kvm_mmu_honors_guest_mtrrs(vcpu->kvm))
> > Could we also add another helper kvm_mmu_cap_honors_guest_mtrrs(), which
> > does not check kvm_arch_has_noncoherent_dma()?
> >
> > +static inline bool kvm_mmu_cap_honors_guest_mtrrs(struct kvm *kvm)
> > +{
> > +	return !!shadow_memtype_mask;
> > +}
> >
> > This is because in patch 4 I plan to do the EPT zap when
> > noncoherent_dma_count goes from 1 to 0.
>
> Hrm, the 1->0 transition is annoying. Rather than trying to capture the "everything
> except non-coherent DMA" aspect, what about this?
>
> mmu.c:
>
> bool __kvm_mmu_honors_guest_mtrrs(struct kvm *kvm, bool vm_has_noncoherent_dma)
> {
> 	/*
> 	 * If TDP is enabled, the host MTRRs are ignored by TDP
> 	 * (shadow_memtype_mask is non-zero), and the VM has non-coherent DMA
> 	 * (DMA doesn't snoop CPU caches), KVM's ABI is to honor the memtype
> 	 * from the guest's MTRRs so that guest accesses to memory that is
> 	 * DMA'd aren't cached against the guest's wishes.
> 	 *
> 	 * Note, KVM may still ultimately ignore guest MTRRs for certain PFNs,
> 	 * e.g. KVM will force UC memtype for host MMIO.
> 	 */
> 	return vm_has_noncoherent_dma && tdp_enabled && shadow_memtype_mask;
> }
>
> mmu.h:
>
> bool __kvm_mmu_honors_guest_mtrrs(struct kvm *kvm, bool vm_has_noncoherent_dma);
>
> static inline bool kvm_mmu_honors_guest_mtrrs(struct kvm *kvm)
> {
> 	return __kvm_mmu_honors_guest_mtrrs(kvm, kvm_arch_has_noncoherent_dma(kvm));
> }
This should work, and it centralizes the comments in one place, though I dislike
having to pass true as vm_has_noncoherent_dma for the 1->0 transition. :)
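
To spell out what bugs me: on the unregister side noncoherent_dma_count has
already dropped to 0, so kvm_arch_has_noncoherent_dma() can't be used there and
the caller has to hard-code true. Roughly something like this (just a sketch
folding your two snippets below together, not the actual patch 4 code):

void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm)
{
	/*
	 * The count is already 0 here, so the kvm_mmu_honors_guest_mtrrs()
	 * wrapper would return false; the zap decision has to pass
	 * vm_has_noncoherent_dma=true by hand.
	 */
	if (!atomic_dec_return(&kvm->arch.noncoherent_dma_count) &&
	    __kvm_mmu_honors_guest_mtrrs(kvm, true))
		kvm_zap_gfn_range(kvm, 0, ~0ULL);
}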
>
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 41d7bb51a297..ad0c43d7f532 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -13146,13 +13146,19 @@ EXPORT_SYMBOL_GPL(kvm_arch_has_assigned_device);
> >
> >  void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
> >  {
> > -	atomic_inc(&kvm->arch.noncoherent_dma_count);
> > +	if (atomic_inc_return(&kvm->arch.noncoherent_dma_count) == 1) {
> > +		if (kvm_mmu_cap_honors_guest_mtrrs(kvm))
> > +			kvm_zap_gfn_range(kvm, 0, ~0ULL);
>
> No need for multiple if statements. Though rather than have identical code in
> both the start/end paths, how about this? That provides a single location for a
> comment. Or maybe first/last instead of start/end?
>
> static void kvm_noncoherent_dma_start_or_end(struct kvm *kvm)
What do "start_or_end" or "first_or_last" stand for here, i.e. the start/end of what?
> {
> 	/* comment goes here. */
> 	if (__kvm_mmu_honors_guest_mtrrs(kvm, true))
> 		kvm_zap_gfn_range(kvm, 0, ~0ULL);
> }
>
> void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
> {
> 	if (atomic_inc_return(&kvm->arch.noncoherent_dma_count) == 1)
> 		kvm_noncoherent_dma_start_or_end(kvm);
> }
> EXPORT_SYMBOL_GPL(kvm_arch_register_noncoherent_dma);
>
> void kvm_arch_unregister_noncoherent_dma(struct kvm *kvm)
> {
> 	if (!atomic_dec_return(&kvm->arch.noncoherent_dma_count))
> 		kvm_noncoherent_dma_start_or_end(kvm);
> }
> EXPORT_SYMBOL_GPL(kvm_arch_unregister_noncoherent_dma);
>