Re: [PATCH v12 11/29] KVM: SEV: Add KVM_SEV_SNP_LAUNCH_UPDATE command

From: Isaku Yamahata
Date: Wed Apr 03 2024 - 11:45:53 EST


On Wed, Apr 03, 2024 at 02:51:59PM +0200,
Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:

> On Wed, Apr 3, 2024 at 12:58 AM Isaku Yamahata <isaku.yamahata@xxxxxxxxx> wrote:
> > I think TDX can use it with a slight change: pass a vcpu instead of kvm, and
> > handle the source-page pinning and mmu_lock there. TDX requires the non-leaf
> > Secure page tables to be populated before a leaf can be added. Maybe, with the
> > assumption that vCPUs don't run yet, the GFN->PFN relation is stable so that
> > mmu_lock isn't needed? What about punch hole?
> >
> > The flow would be something like the following:
> >
> > - lock slots_lock
> >
> > - kvm_gmem_populate(vcpu)
> >   - pin down the source page instead of do_memcpy.
>
> Both pinning the source page and the memcpy can be done in the
> callback. I think the right thing to do is:
>
> 1) eliminate do_memcpy, letting the AMD code take care of
> copy_from_user.
>
> 2) pass to the callback only gfn/pfn/src, where src is computed as
>
> args->src ? args->src + i * PAGE_SIZE : NULL
>
> If another architecture/vendor needs do_memcpy, they can add
> something like kvm_gmem_populate_copy.
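
So IIUC the per-page loop in kvm_gmem_populate() would end up roughly
like this (untested sketch; the args struct/field names and the
__kvm_gmem_get_pfn() signature are only approximated from the v12
patch):

	long i, ret = 0;

	/* inside kvm_gmem_populate(); slots_lock held, vCPUs not running */
	for (i = 0; i < args->npages; i++) {
		gfn_t gfn = args->start_gfn + i;
		kvm_pfn_t pfn;
		int max_order;

		ret = __kvm_gmem_get_pfn(file, slot, gfn, &pfn, &max_order);
		if (ret)
			break;

		/*
		 * No do_memcpy in common code: the vendor callback copies
		 * from src itself (copy_from_user() for SNP, pin the source
		 * page for TDX), or leaves the page untouched if src is NULL.
		 */
		ret = post_populate(kvm, gfn, pfn,
				    args->src ? args->src + i * PAGE_SIZE : NULL,
				    max_order, args->opaque);
		/* page refcount / cleanup handling omitted */
		if (ret)
			break;
	}

That keeps SNP's copy_from_user() in its callback and lets TDX pin the
source page there instead.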
>
> >   - get the pfn with __kvm_gmem_get_pfn()
> >   - read-lock mmu_lock
> >   - in the post_populate callback
> >     - look up the TDP MMU page tables to check that they are
> >       populated, with a lookup-only version of kvm_tdp_mmu_map().
> >       We need vcpu instead of kvm.
>
> Passing vcpu can be done using the opaque callback argument to
> kvm_gmem_populate.
>
> Likewise, the mmu_lock can be taken by the TDX post_populate
> callback.

Yes, it should work. Let me give it a try.
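
For reference, I'm thinking of something along these lines on the TDX
side (hand-written sketch; tdx_gmem_post_populate(),
kvm_tdp_mmu_is_populated() and tdx_mem_page_add() don't exist and are
only placeholders for the lookup-only kvm_tdp_mmu_map() variant and the
TDH.MEM.PAGE.ADD wrapper):

struct tdx_gmem_post_populate_arg {
	struct kvm_vcpu *vcpu;
};

static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
				  void __user *src, int order, void *opaque)
{
	struct tdx_gmem_post_populate_arg *arg = opaque;
	struct kvm_vcpu *vcpu = arg->vcpu;
	int ret = -EIO;

	/*
	 * Pin/copy the source page here, since common code no longer
	 * does do_memcpy.
	 */

	read_lock(&kvm->mmu_lock);
	/*
	 * Non-leaf Secure page tables must already be populated before
	 * adding the leaf, so only a lookup of the TDP MMU is needed.
	 */
	if (kvm_tdp_mmu_is_populated(vcpu, gfn_to_gpa(gfn)))
		ret = tdx_mem_page_add(kvm, gfn, pfn, src);
	read_unlock(&kvm->mmu_lock);

	return ret;
}

The TDX ioctl would call kvm_gmem_populate() with &arg as the opaque
pointer, holding slots_lock across the whole thing.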
--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>