Re: [PATCH Part1 v5 32/38] x86/sev: enable SEV-SNP-validated CPUID in #VC handlers

From: Michael Roth
Date: Mon Aug 30 2021 - 11:56:09 EST


On Fri, Aug 27, 2021 at 10:47:42AM -0500, Brijesh Singh wrote:
>
> On 8/27/21 10:18 AM, Borislav Petkov wrote:
> > On Fri, Aug 20, 2021 at 10:19:27AM -0500, Brijesh Singh wrote:
> >> From: Michael Roth <michael.roth@xxxxxxx>
> >>
> >> This adds support for utilizing the SEV-SNP-validated CPUID table in
> > s/This adds support for utilizing/Utilize/
> >
> > Yap, it can really be that simple. :)
> >
> >> the various #VC handler routines used throughout boot/run-time. Mostly
> >> this is handled by re-using the CPUID lookup code introduced earlier
> >> for the boot/compressed kernel, but at various stages of boot some work
> >> needs to be done to ensure the CPUID table is set up and remains
> >> accessible throughout. The following init routines are introduced to
> >> handle this:
> > Do not talk about what your patch does - that should hopefully be
> > visible in the diff itself. Rather, talk about *why* you're doing what
> > you're doing.
> >
> >> sev_snp_cpuid_init():
> > This one is not really introduced - it is already there.
> >
> > <snip all the complex rest>
> >
> > So this patch is making my head spin. It seems we're dancing a lot of
> > dance just to have our CPUID page present at all times. Which begs the
> > question: do we need it during the whole lifetime of the guest?
>
> Mike can correct me, but we need it for the entire lifetime of the
> guest. Whenever the guest needs a CPUID value, the #VC handler will
> refer to this page.

That's right, and CPUID instructions can be executed at pretty much
any stage of the boot process.
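To make the lookup concrete, here is a rough sketch of how a #VC handler
could consult the validated table instead of trusting the hypervisor's
CPUID answer. Field names, sizes, and the 64-entry cap are illustrative
stand-ins for the layout defined in the GHCB specification, and
`snp_cpuid_lookup` is a hypothetical helper, not the function from the
patch:

```c
#include <stdint.h>

/* Illustrative sketch of the SEV-SNP CPUID page: a count followed by
 * up to 64 function entries (authoritative layout: GHCB spec). */
struct snp_cpuid_fn {
	uint32_t eax_in;	/* CPUID leaf */
	uint32_t ecx_in;	/* CPUID subleaf */
	uint64_t xcr0_in;
	uint64_t xss_in;
	uint32_t eax, ebx, ecx, edx;
	uint64_t reserved;
};

struct snp_cpuid_info {
	uint32_t count;
	uint32_t reserved1;
	uint64_t reserved2;
	struct snp_cpuid_fn fn[64];
};

/* On a CPUID-induced #VC, scan the validated table for a matching
 * leaf/subleaf and return its register values. */
static int snp_cpuid_lookup(const struct snp_cpuid_info *info,
			    uint32_t leaf, uint32_t subleaf,
			    uint32_t regs[4])
{
	for (uint32_t i = 0; i < info->count && i < 64; i++) {
		const struct snp_cpuid_fn *fn = &info->fn[i];

		if (fn->eax_in != leaf || fn->ecx_in != subleaf)
			continue;
		regs[0] = fn->eax;
		regs[1] = fn->ebx;
		regs[2] = fn->ecx;
		regs[3] = fn->edx;
		return 0;
	}
	return -1; /* no entry; caller decides how to handle it */
}
```

Since this has to work from the decompression stub through run time,
the table pointer itself is what the init routines in the patch keep
valid across the different boot stages.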

>
>
> > Regardless, I think this can be simplified by orders of
> > magnitude if we allocated statically 4K for that CPUID page in
> > arch/x86/boot/compressed/mem_encrypt.S, copied the supplied CPUID page
> > from the firmware to it and from now on, work with our own copy.
>
> Actually a VMM could populate more than one page for the CPUID
> table. One page can hold 64 entries, and I believe Mike is already
> running into that limit (with QEMU) and exploring ideas to extend it
> beyond a page.

I added the range checks in this version so that a hypervisor can still
leave out all-zero entries, so I think the limit can be avoided
near-term at least. But yes, there's still a possibility we might need
an extra page in the future; I'm not sure how scarce storage is for
stuff like __ro_after_init, so it's worth considering.
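The range-check idea can be sketched as follows. The maximum standard
and extended leaves come from the table's own entries for leaf 0x0 and
0x80000000; a requested leaf that falls inside those ranges but has no
table entry is treated as all-zero rather than as an error, so the
hypervisor doesn't have to burn table slots enumerating zero leaves.
Everything here (the flat entry struct, `snp_cpuid`, the sample values)
is a simplified illustration, not the patch's actual code:

```c
#include <stdint.h>

struct cpuid_regs { uint32_t eax, ebx, ecx, edx; };

/* Minimal stand-in for a table entry; real SNP entries also carry
 * XCR0/XSS inputs, omitted here for brevity. */
struct cpuid_ent { uint32_t leaf, subleaf; struct cpuid_regs regs; };

static const struct cpuid_ent table[] = {
	{ 0x0,        0, { 0xd,        0, 0, 0 } }, /* max std leaf */
	{ 0x80000000, 0, { 0x8000001f, 0, 0, 0 } }, /* max ext leaf */
	{ 0x1,        0, { 0x00a00f11, 0, 0, 0 } }, /* made-up values */
};
static const unsigned table_len = sizeof(table) / sizeof(table[0]);

/* Range-checked lookup: an in-range leaf with no entry yields all
 * zeros, so all-zero leaves can be omitted from the table. */
static int snp_cpuid(uint32_t leaf, uint32_t subleaf,
		     struct cpuid_regs *out)
{
	uint32_t std_max = 0, ext_max = 0;

	for (unsigned i = 0; i < table_len; i++) {
		if (table[i].leaf == leaf && table[i].subleaf == subleaf) {
			*out = table[i].regs;
			return 0;
		}
		/* Note the advertised ranges as we scan. */
		if (table[i].leaf == 0x0)
			std_max = table[i].regs.eax;
		if (table[i].leaf == 0x80000000u)
			ext_max = table[i].regs.eax;
	}

	if (leaf <= std_max ||
	    (leaf >= 0x80000000u && leaf <= ext_max)) {
		out->eax = out->ebx = out->ecx = out->edx = 0;
		return 0;	/* in range but not listed: all-zero */
	}
	return -1;		/* outside both advertised ranges */
}
```

With this, only leaves with non-zero output need table slots, which is
what relieves the pressure on the one-page, 64-entry limit.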