Re: [PATCH v10 11/11] x86/tdx: Handle CPUID via #VE

From: Thomas Gleixner
Date: Thu Oct 14 2021 - 08:01:12 EST


On Fri, Oct 08 2021 at 22:37, Kuppuswamy Sathyanarayanan wrote:
> From: "Kirill A. Shutemov" <kirill.shutemov@xxxxxxxxxxxxxxx>
>
> When running virtualized, the CPUID instruction is handled differently
> based on the leaf being accessed. The behavior depends only on the
> leaf and applies equally to both kernel/ring-0 and userspace/ring-3
> execution of CPUID. Historically, there are two basic classes:
>
> * Leaves handled transparently to the guest
> * Leaves handled by the VMM
>
> In a typical guest without TDX, "handled by the VMM" leaves cause a
> VMEXIT. TDX replaces these VMEXITs with a #VE exception in the guest.
> The guest typically handles the #VE by making a hypercall to the VMM.
>
> The TDX module specification [1], section titled "CPUID Virtualization"
> talks about a few more classes of CPUID handling. But, for the purposes
> of this patch, the "handled transparently" CPUID leaves are all lumped
> together because the guest handling is the same.

What does 'for the purposes of this patch' mean? And I have no idea what
'lumped together' means either.

#VE is raised either on CPUID leaf/sub-leaf combinations which are not
part of the CPUID virtualization table, or, if the guest requests it,
on all CPUID invocations (Ring0, Ring3 or both).

So this patch implements the #VE handling for EXIT_REASON_CPUID by
handing it through to the hypercall, which in turn lets the TDX module
handle it by invoking the host VMM.
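
To spell out what we are actually talking about: the handler boils down
to something like the sketch below. This is only for the sake of the
discussion; tdx_hypercall_cpuid() and the exact register mapping are
placeholders I made up, not the code in this patch.

#include <linux/types.h>
#include <asm/ptrace.h>

/* Hypothetical helper: issue TDVMCALL<Instruction.CPUID> to the VMM. */
int tdx_hypercall_cpuid(unsigned long leaf, unsigned long subleaf,
			unsigned long out[4]);

/* Sketch of a #VE handler for EXIT_REASON_CPUID. */
static bool handle_cpuid_ve(struct pt_regs *regs)
{
	unsigned long out[4];

	/*
	 * Forward the requested leaf (EAX) and sub-leaf (ECX) to the
	 * host VMM via the hypercall mediated by the TDX module.
	 */
	if (tdx_hypercall_cpuid(regs->ax, regs->cx, out))
		return false;

	/* Copy the VMM provided EAX/EBX/ECX/EDX back to the guest. */
	regs->ax = out[0];
	regs->bx = out[1];
	regs->cx = out[2];
	regs->dx = out[3];

	return true;
}

IOW, the guest does not interpret the leaf at all; it punts the whole
thing to the VMM and copies back whatever comes out.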

So unless the guest requested #VE on all CPUID invocations, it won't see
a #VE for the transparent leaf/sub-leaf combinations. #VE is raised for
the VMM-handled ones, which then go through the hypercall, right?

I must be missing something, but that last paragraph does not make any
sense to me.

Thanks,

tglx