Re: [PATCH 10/16] KVM: x86/tdp_mmu: Support TDX private mapping for TDP MMU
From: Isaku Yamahata
Date: Tue May 28 2024 - 21:06:41 EST
On Tue, May 28, 2024 at 06:29:59PM +0000,
"Edgecombe, Rick P" <rick.p.edgecombe@xxxxxxxxx> wrote:
> On Tue, 2024-05-28 at 19:16 +0200, Paolo Bonzini wrote:
> > > > After this, gfn_t's never have the shared bit. It's a simple rule. The
> > > > MMU mostly thinks it's operating on a shared root that is mapped at the
> > > > normal GFN. Only the iterator knows that the shared PTEs are actually in
> > > > a different location.
> > > >
> > > > There are some negative side effects:
> > > > 1. The struct kvm_mmu_page's gfn doesn't match its actual mapping
> > > > anymore.
> > > > 2. As a result of the above, the code that flushes TLBs for a specific
> > > > GFN will be confused. It won't functionally matter for TDX, it will just
> > > > look buggy to see flushing code called with the wrong gfn.
> > >
> > > flush_remote_tlbs_range() is only for the Hyper-V optimization. In other
> > > cases, x86_ops.flush_remote_tlbs_range is NULL or the member isn't defined
> > > at compile time, so the remote TLB flush falls back to flushing the whole
> > > range. I don't expect TDX in a Hyper-V guest. I have to admit that the
> > > code looks superficially broken and confusing.
> >
> > You could add an "&& kvm_has_private_root(kvm)" to
> > kvm_available_flush_remote_tlbs_range(), since
> > kvm_has_private_root(kvm) is sort of equivalent to "there is no 1:1
> > correspondence between gfn and PTE to be flushed".
> >
> > I am conflicted myself, but the upsides below are pretty substantial.
>
> It looks like kvm_available_flush_remote_tlbs_range() is not checked in many of
> the paths that get to x86_ops.flush_remote_tlbs_range().
>
> So maybe something like:
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 65bbda95acbb..e09bb6c50a0b 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1959,14 +1959,7 @@ static inline int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
>
> #if IS_ENABLED(CONFIG_HYPERV)
> #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> -static inline int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn,
> -                                                    u64 nr_pages)
> -{
> -        if (!kvm_x86_ops.flush_remote_tlbs_range)
> -                return -EOPNOTSUPP;
> -
> -        return static_call(kvm_x86_flush_remote_tlbs_range)(kvm, gfn, nr_pages);
> -}
> +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages);
> #endif /* CONFIG_HYPERV */
>
> enum kvm_intr_type {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 43d70f4c433d..9dc1b3db286d 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -14048,6 +14048,14 @@ int kvm_sev_es_string_io(struct kvm_vcpu *vcpu, unsigned int size,
> }
> EXPORT_SYMBOL_GPL(kvm_sev_es_string_io);
>
> +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
> +{
> +        if (!kvm_x86_ops.flush_remote_tlbs_range || kvm_gfn_direct_mask(kvm))
> +                return -EOPNOTSUPP;
> +
> +        return static_call(kvm_x86_flush_remote_tlbs_range)(kvm, gfn, nr_pages);
> +}
> +
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_entry);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_mmio);
kvm_x86_ops.flush_remote_tlbs_range is defined only when CONFIG_HYPERV=y, so the
new out-of-line kvm_arch_flush_remote_tlbs_range() in x86.c needs to be wrapped
in #ifdef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE ... #endif.
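Something like this minimal sketch (untested), i.e. the x86.c hunk from the diff
above with the definition wrapped in the same guard; kvm_gfn_direct_mask() is
the helper from that diff, not an existing upstream one:

#ifdef __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages)
{
        /*
         * The hook only exists for the Hyper-V optimization, and the gfn
         * passed in no longer maps 1:1 to the flushed PTEs when a direct
         * mask is in use, so fall back to a full flush in both cases.
         */
        if (!kvm_x86_ops.flush_remote_tlbs_range || kvm_gfn_direct_mask(kvm))
                return -EOPNOTSUPP;

        return static_call(kvm_x86_flush_remote_tlbs_range)(kvm, gfn, nr_pages);
}
#endif /* __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE */
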
--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>