Re: [PATCH 20/30] KVM: TDX: create/destroy VM structure

From: Sean Christopherson
Date: Thu Feb 20 2025 - 20:09:00 EST


On Thu, Feb 20, 2025, Sean Christopherson wrote:
> TL;DR: Please don't merge this patch to kvm/next or kvm/queue.
>
> On Thu, Feb 20, 2025, Paolo Bonzini wrote:
> > @@ -72,8 +94,10 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
> >          .has_emulated_msr = vmx_has_emulated_msr,
> >
> >          .vm_size = sizeof(struct kvm_vmx),
> > -        .vm_init = vmx_vm_init,
> > -        .vm_destroy = vmx_vm_destroy,
> > +
> > +        .vm_init = vt_vm_init,
> > +        .vm_destroy = vt_vm_destroy,
> > +        .vm_free = vt_vm_free,
> >
> >          .vcpu_precreate = vmx_vcpu_precreate,
> >          .vcpu_create = vmx_vcpu_create,
>
> ...
>
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 374d89e6663f..e0b9b845df58 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12884,6 +12884,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
> >          kvm_page_track_cleanup(kvm);
> >          kvm_xen_destroy_vm(kvm);
> >          kvm_hv_destroy_vm(kvm);
> > +        static_call_cond(kvm_x86_vm_free)(kvm);
> >  }
>
> Sorry to throw a wrench in things, but I have a fix that I want to send for 6.14[1],
> i.e. before this code, and to land that fix I need/want to destroy vCPUs before
> calling kvm_x86_ops.vm_destroy(). *sigh*
>
> The underlying issue is that both nVMX and nSVM suck and access all manner of VM-wide
> state when destroying a vCPU that is currently in nested guest mode, and I want
> to fix the most pressing issue of destroying vCPUs at a random time once and for
> all. nVMX and nSVM also need to be cleaned up to not access so much darn state,
> but I'm worried that "fixing" the nested cases will only whack the biggest mole.
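
For context, the end state I'm after is to free vCPUs before calling the vendor
hook, i.e. something like the below (hand-wavy sketch, the exact placement
relative to the rest of the teardown is TBD):

        void kvm_arch_destroy_vm(struct kvm *kvm)
        {
                ...
                /* Free vCPUs before any VM-wide state is torn down. */
                kvm_destroy_vcpus(kvm);

                static_call_cond(kvm_x86_vm_destroy)(kvm);
                ...
        }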

...

> And so my plan is to carve out a kvm_destroy_mmus() helper, which can then call
> the TDX hook to release/reclaim the HKID, which I assume needs to be done after
> KVM's general MMU destruction, but before vCPUs are freed.
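
Shape-wise, what I had in mind was something like the below; kvm_destroy_mmus()
doesn't exist today, and the hook name is a strawman:

        static void kvm_destroy_mmus(struct kvm *kvm)
        {
                ... /* unload/free MMU state for all vCPUs */

                /*
                 * Strawman hook: let TDX release/reclaim the HKID after MMU
                 * teardown, but before vCPUs are freed.
                 */
                static_call_cond(kvm_x86_mmus_destroyed)(kvm);
        }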

...

> void kvm_arch_destroy_vm(struct kvm *kvm)
> {
>         /*
>          * WARNING! MMUs must be destroyed before vCPUs, and vCPUs must be
>          * destroyed before any VM state. Most MMU state is VM-wide, but is
>          * tracked per-vCPU, and so must be unloaded/freed in its entirety
>          * before any one vCPU is destroyed.

Argh, after digging more, this isn't actually true. The separate "unload MMUs"
code is leftover from when KVM rather stupidly tried to free all MMU pages when
freeing a vCPU.

Commit 7b53aa565084 ("KVM: Fix vcpu freeing for guest smp") "fixed" things by
unloading MMUs before destroying vCPUs, but the underlying problem was trying to
free _all_ MMU pages when destroying a single vCPU. That eventually got fixed
for good (haven't checked when), but the separate MMU unloading never got cleaned
up.

So, scratch the mmu_destroy() idea. But I still want/need to move vCPU destruction
before vm_destroy.

Now that kvm_arch_pre_destroy_vm() is a thing, one idea would be to add
kvm_x86_ops.pre_destroy_vm(), which would pair decently well with the existing
call to kvm_mmu_pre_destroy_vm().
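
I.e. something like this (untested, and the hook name just mirrors the existing
pairing with the common code):

        void kvm_arch_pre_destroy_vm(struct kvm *kvm)
        {
                kvm_mmu_pre_destroy_vm(kvm);

                /*
                 * New optional hook, e.g. so that TDX can release/reclaim the
                 * HKID before vCPUs and the rest of VM state are torn down.
                 */
                static_call_cond(kvm_x86_pre_destroy_vm)(kvm);
        }

with TDX implementing .pre_destroy_vm() to do the HKID release/reclaim.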