Re: [PATCH v15 22/23] x86/mce: Improve error log of kernel space TDX #MC due to erratum

From: Huang, Kai
Date: Mon Dec 04 2023 - 16:01:15 EST


On Mon, 2023-12-04 at 09:07 -0800, Dave Hansen wrote:
> On 12/3/23 03:44, Huang, Kai wrote:
> ...
> > > It doesn't need perfect accuracy. But how do we know it's not going to
> > > go, for instance, chase a bad pointer?
> > >
> > > > + if (tdx_module_status != TDX_MODULE_INITIALIZED)
> > > > + return false;
> > >
> > > As an example, what prevents this CPU from observing
> > > tdx_module_status==TDX_MODULE_INITIALIZED while the PAMT structure is
> > > being assembled?
> >
> > There are two kinds of serializing operations between assembling the
> > TDMR/PAMT structures and setting tdx_module_status to
> > TDX_MODULE_INITIALIZED: 1) wbinvd_on_all_cpus(); 2) a bunch of SEAMCALLs.
> >
> > WBINVD is a serializing instruction.  SEAMCALL is a VM exit to the TDX
> > module, which involves switching the GDT/LDT/control registers/MSRs, so it
> > is also a serializing operation.
> >
> > But perhaps we can explicitly add an smp_wmb() between assembling the
> > TDMR/PAMT structures and setting tdx_module_status, if that's better.
>
> ... and there's zero documentation of this dependency because ... ?
>
> I suspect it's because it was never looked at until Tony made a comment
> about it and we started looking at it. In other words, it worked by
> coincidence.

I should have put a comment around here. My bad.

Kirill also helped to look at the code.
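
To make the ordering explicit instead of relying on WBINVD/SEAMCALL being
serializing, something like below should work.  This is only a rough sketch
(the real init logic is elided and the function bodies simplified);
tdx_module_status, tdx_tdmr_list and is_pamt_page() are the symbols from the
patch:

	/* Writer side: TDX module initialization */
	static int init_tdx_module(void)
	{
		/* ... build tdx_tdmr_list and allocate/assemble the PAMTs ... */

		/*
		 * Make sure the fully assembled TDMR/PAMT data is visible
		 * before any reader can observe TDX_MODULE_INITIALIZED.
		 */
		smp_wmb();
		tdx_module_status = TDX_MODULE_INITIALIZED;

		return 0;
	}

	/* Reader side: called from the #MC handler */
	static bool is_pamt_page(unsigned long phys)
	{
		if (tdx_module_status != TDX_MODULE_INITIALIZED)
			return false;

		/*
		 * Pairs with the smp_wmb() above: only walk tdx_tdmr_list
		 * after TDX_MODULE_INITIALIZED has been observed.
		 */
		smp_rmb();

		/* ... walk tdx_tdmr_list as in the hunk quoted below ... */
		return false;
	}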

>
> > > > + for (i = 0; i < tdmr_list->nr_consumed_tdmrs; i++) {
> > > > + unsigned long base, size;
> > > > +
> > > > + tdmr_get_pamt(tdmr_entry(tdmr_list, i), &base, &size);
> > > > +
> > > > + if (phys >= base && phys < (base + size))
> > > > + return true;
> > > > + }
> > > > +
> > > > + return false;
> > > > +}
> > > > +
> > > > +/*
> > > > + * Return whether the memory page at the given physical address is TDX
> > > > + * private memory or not. Called from #MC handler do_machine_check().
> > > > + *
> > > > + * Note this function may not return an accurate result in rare cases.
> > > > + * This is fine as the #MC handler doesn't need a 100% accurate result,
> > > > + * because it cannot distinguish whether the #MC comes from a
> > > > + * software bug or from a real hardware error anyway.
> > > > + */
> > > > +bool tdx_is_private_mem(unsigned long phys)
> > > > +{
> > > > + struct tdx_module_args args = {
> > > > + .rcx = phys & PAGE_MASK,
> > > > + };
> > > > + u64 sret;
> > > > +
> > > > + if (!platform_tdx_enabled())
> > > > + return false;
> > > > +
> > > > + /* Get page type from the TDX module */
> > > > + sret = __seamcall_ret(TDH_PHYMEM_PAGE_RDMD, &args);
> > > > + /*
> > > > + * Handle the case that CPU isn't in VMX operation.
> > > > + *
> > > > + * KVM guarantees no VM is running (thus no TDX guest)
> > > > + * when there's any online CPU that isn't in VMX operation.
> > > > + * This means there will be no TDX guest private memory
> > > > + * and Secure-EPT pages. However the TDX module may have
> > > > + * been initialized and the memory page could be PAMT.
> > > > + */
> > > > + if (sret == TDX_SEAMCALL_UD)
> > > > + return is_pamt_page(phys);
> > >
> > > Either this is comment is wonky or the module initialization is buggy.
> > >
> > > config_global_keyid() goes and does SEAMCALLs on all CPUs. There are
> > > zero checks or special handling in there for whether the CPU has done
> > > VMXON. So, by the time we've started initializing the TDX module
> > > (including the PAMT), all online CPUs must be able to do SEAMCALLs. Right?
> > >
> > > So how can we have a working PAMT here when this CPU can't do SEAMCALLs?
> >
> > The corner case is that KVM can enable VMX on all CPUs, initialize the TDX
> > module, and then disable VMX on all CPUs again.  One example is that KVM
> > can be unloaded after it has initialized the TDX module.
> >
> > In this case the CPU cannot do SEAMCALL, but the PAMTs are already in use :-)
> >
> > However, if SEAMCALL cannot be made (because the CPU is not in VMX
> > operation), then the module is either already fully initialized or its
> > initialization hasn't been attempted at all, so both tdx_module_status and
> > tdx_tdmr_list are stable to access.
>
> None of this even matters. Let's remind ourselves how unbelievably
> unlikely this is:
>
> 1. You're on an affected system that has the erratum
> 2. The KVM module gets unloaded, runs vmxoff
> 3. A kernel bug using a very rare partial write corrupts the PAMT
> 4. A second bug reads the PAMT consuming poison, #MC is generated
> 5. Enter #MC handler, SEAMCALL fails
> 6. #MC handler just reports a plain hardware error

Yes, I totally agree it is very unlikely to happen.

>
> The only thing even remotely wrong with this situation is that the
> report won't pin the #MC on TDX. Play stupid games (removing modules),
> win stupid prizes (worse error message).
>
> Can we dynamically mark a module as unsafe to remove? If so, I'd
> happily just say that we should make kvm_intel.ko unsafe to remove when
> TDX is supported and move on with life.
>
> tl;dr: I think even looking at a #MC on the PAMT after the kvm module is
> removed is a fool's errand.

Sorry I wasn't clear enough.  KVM actually turns off VMX when it destroys the
last VM, so the KVM module doesn't need to be removed for VMX to be off.  I
used "KVM can be unloaded" only as one example to explain that the PAMT can be
in use while VMX is off.
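
Just to illustrate how the result is meant to be consumed on machines with the
erratum (this is not the actual hunk touching the #MC code -- the helper name
below is made up, and the exact hook point and message wording in the real
patch may differ; X86_BUG_TDX_PW_MCE is the bug flag from the erratum
detection patch earlier in this series):

	/* Hypothetical helper in the #MC path, for illustration only */
	static void mce_report_tdx_hint(struct mce *m)
	{
		/* Only parts with the TDX partial write erratum are affected */
		if (!boot_cpu_has_bug(X86_BUG_TDX_PW_MCE))
			return;

		/* Need a valid error address to classify the page */
		if (!(m->status & MCI_STATUS_ADDRV))
			return;

		/*
		 * A kernel-space #MC hitting TDX private memory (TDX guest
		 * private pages, Secure-EPT pages or PAMT) is almost
		 * certainly a kernel bug doing a partial write, not a real
		 * hardware error.
		 */
		if (tdx_is_private_mem(m->addr))
			pr_err("#MC on TDX private memory: possible kernel bug due to partial write erratum\n");
	}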