Re: [PATCH v3 13/17] KVM: x86/tdp_mmu: Support mirror root for TDP MMU

From: Yan Zhao
Date: Wed Jun 26 2024 - 01:06:32 EST


On Wed, Jun 26, 2024 at 04:33:20AM +0800, Edgecombe, Rick P wrote:
> On Tue, 2024-06-25 at 13:43 +0800, Yan Zhao wrote:
> > > > > I was originally suspicious of the asymmetry of the tear down of mirror
> > > > > and
> > > > > direct roots vs the allocation. Do you see a concrete problem, or just
> > > > > advocating for safety?
> > > IMO it's a concrete problem, though rare up to now. e.g.
> > >
> > > After repeatedly hot-plugging and hot-unplugging memory, which increases
> > > the memslots generation, kvm_mmu_zap_all_fast() will be called to
> > > invalidate direct roots when the memslots generation wraps around.
>
> Hmm, yes. I'm not sure about putting the check there though. It adds even more
> confusion to the lifecycle.
> - mirror_root_hpa != INVALID_PAGE check in a different place than the
> root.hpa != INVALID_PAGE check.
> - they get allocated in the same place
> - they are torn down in different places.
>
> Can you think of a clearer fix for it? Maybe we can just move the mirror root
> allocation such that it's not subject to the reload path? Like something that
> matches the tear down in kvm_mmu_destroy()?
But we still need the reload path to have each vCPU hold a reference to the
mirror root.
What about the fix below?

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 026e8edfb0bd..4decd13457ec 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -127,9 +127,28 @@ void kvm_mmu_sync_prev_roots(struct kvm_vcpu *vcpu);
 void kvm_mmu_track_write(struct kvm_vcpu *vcpu, gpa_t gpa, const u8 *new,
                          int bytes);

+static inline bool kvm_has_mirrored_tdp(const struct kvm *kvm)
+{
+        return kvm->arch.vm_type == KVM_X86_TDX_VM;
+}
+
+static inline bool kvm_mmu_root_hpa_is_invalid(struct kvm_vcpu *vcpu)
+{
+        return vcpu->arch.mmu->root.hpa == INVALID_PAGE;
+}
+
+static inline bool kvm_mmu_mirror_root_hpa_is_invalid(struct kvm_vcpu *vcpu)
+{
+        if (!kvm_has_mirrored_tdp(vcpu->kvm))
+                return false;
+
+        return vcpu->arch.mmu->mirror_root_hpa == INVALID_PAGE;
+}
+
 static inline int kvm_mmu_reload(struct kvm_vcpu *vcpu)
 {
-        if (likely(vcpu->arch.mmu->root.hpa != INVALID_PAGE))
+        if (!kvm_mmu_root_hpa_is_invalid(vcpu) &&
+            !kvm_mmu_mirror_root_hpa_is_invalid(vcpu))
                 return 0;

         return kvm_mmu_load(vcpu);
@@ -322,11 +341,6 @@ static inline gpa_t kvm_translate_gpa(struct kvm_vcpu *vcpu,
         return translate_nested_gpa(vcpu, gpa, access, exception);
 }

-static inline bool kvm_has_mirrored_tdp(const struct kvm *kvm)
-{
-        return kvm->arch.vm_type == KVM_X86_TDX_VM;
-}
-
 static inline gfn_t kvm_gfn_direct_bits(const struct kvm *kvm)
 {
         return kvm->arch.gfn_direct_bits;
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e1299eb03e63..5e7d92074f70 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3702,9 +3702,11 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
         int r;

         if (tdp_mmu_enabled) {
-                if (kvm_has_mirrored_tdp(vcpu->kvm))
+                if (kvm_mmu_mirror_root_hpa_is_invalid(vcpu))
                         kvm_tdp_mmu_alloc_root(vcpu, true);
-                kvm_tdp_mmu_alloc_root(vcpu, false);
+
+                if (kvm_mmu_root_hpa_is_invalid(vcpu))
+                        kvm_tdp_mmu_alloc_root(vcpu, false);
                 return 0;
         }
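
Not part of the fix itself, but to illustrate the lifecycle issue the checks
are meant to avoid, here is a toy user-space model (purely illustrative; none
of the names below are kernel symbols). My understanding is that without the
checks, every reload after kvm_mmu_zap_all_fast() would take another reference
to the still-valid mirror root that nothing ever puts, while with the checks a
root is only (re)taken when it is actually invalid:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for a TDP MMU root page: validity plus a refcount. */
struct toy_root {
        bool valid;
        int refcount;
};

static struct toy_root direct_root, mirror_root;

/* The vCPU's cached roots; NULL plays the role of INVALID_PAGE. */
static struct toy_root *vcpu_direct, *vcpu_mirror;

static struct toy_root *toy_alloc_root(struct toy_root *r)
{
        if (!r->valid)
                r->valid = true;
        r->refcount++;          /* each call hands out one reference */
        return r;
}

/* Memslot-generation wrap: only the direct root gets invalidated. */
static void toy_zap_all_fast(void)
{
        direct_root.valid = false;
        if (vcpu_direct) {
                vcpu_direct->refcount--;        /* vCPU drops its stale ref */
                vcpu_direct = NULL;             /* root.hpa = INVALID_PAGE */
        }
}

/* Reload as in the diff above: only (re)take roots that are invalid. */
static void toy_reload_checked(void)
{
        if (!vcpu_mirror)
                vcpu_mirror = toy_alloc_root(&mirror_root);
        if (!vcpu_direct)
                vcpu_direct = toy_alloc_root(&direct_root);
}

/* Reload without the check: re-takes the mirror root on every reload. */
static void toy_reload_unconditional(void)
{
        vcpu_mirror = toy_alloc_root(&mirror_root);     /* extra reference */
        if (!vcpu_direct)
                vcpu_direct = toy_alloc_root(&direct_root);
}

int main(void)
{
        toy_reload_checked();           /* initial load */
        toy_zap_all_fast();
        toy_reload_checked();           /* mirror refcount stays at 1 */
        printf("checked:       mirror refcount = %d\n", mirror_root.refcount);

        toy_zap_all_fast();
        toy_reload_unconditional();     /* mirror refcount grows to 2 */
        printf("unconditional: mirror refcount = %d\n", mirror_root.refcount);
        return 0;
}

With the checks in place, the vCPU ends up holding exactly one reference to
the mirror root across reloads, which matches the one-time tear-down in
kvm_mmu_destroy() that you mentioned.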