Re: [PATCH] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec
From: Huang, Kai
Date: Tue Mar 10 2026 - 03:19:31 EST
On Mon, 2026-03-09 at 16:38 +0000, Edgecombe, Rick P wrote:
> On Mon, 2026-03-02 at 23:22 +1300, Kai Huang wrote:
> > TDX can leave the cache in an incoherent state for the memory it
> > uses. During kexec the kernel does a WBINVD for each CPU before
> > memory gets reused in the second kernel.
> >
> > There were two considerations for where this WBINVD should happen.
> > To handle cases where the cache might become incoherent during the
> > initial stages of kexec, the flush needs to happen late in the kexec
> > path, when the kexecing CPU stops all remote CPUs. However, the late
> > kexec path is sensitive to existing races, so to avoid perturbing
> > that operation it is better to do the flush earlier when possible.
> >
> > The existing solution is to track the need for the kexec-time WBINVD
> > generically (i.e., not just for TDX) in a per-cpu variable. The late
> > invocation only happens if the earlier TDX-specific logic in
> > tdx_cpu_flush_cache_for_kexec() didn't already take care of the work.
> > This earlier WBINVD logic was built into KVM's existing syscore ops
> > shutdown() handler, which is called earlier in the kexec path.
> >
> > However, this accidentally added it to KVM's unload path as well
> > (also the "error path" when bringing up TDX during KVM module load),
> > which uses the same internal functions. This makes some sense too:
> > if KVM is being unloaded, TDX cache-affecting operations will likely
> > cease, so it is a good point to do the work while KVM still can,
> > since it won't be around to handle the shutdown operation later.
> >
> > Unfortunately this KVM unload invocation triggers a lockdep warning
> > in tdx_cpu_flush_cache_for_kexec(). Since
> > tdx_cpu_flush_cache_for_kexec() is doing WBINVD on a specific CPU, it
> > has an assert for preemption being disabled. This works fine for the
> > kexec time invocation, but the KVM unload path calls this as part of
> > a CPUHP callback for which, despite always executing on the target
> > CPU, preemption is not disabled.
> >
> > It might be better to add the earlier invocation logic to a dedicated
> > arch/x86 TDX syscore shutdown() handler, but to make the fix more
> > backport friendly, just adjust the lockdep assert in
> > tdx_cpu_flush_cache_for_kexec().
> >
> > The real requirement is that tdx_cpu_flush_cache_for_kexec() must run
> > on the same CPU throughout. It's OK for it to be preempted in the
> > middle, as long as it cannot be rescheduled to another CPU.
> >
> > Remove the too-strong lockdep_assert_preemption_disabled(), and
> > change this_cpu_{read|write}() to __this_cpu_{read|write}(), which
> > provide the more appropriate check (when CONFIG_DEBUG_PREEMPT is
> > enabled): that the context cannot be migrated to another CPU in the
> > middle of the operation.
> >
> > Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
> > Cc: stable@xxxxxxxxxxxxxxx
> > Reported-by: Vishal Verma <vishal.l.verma@xxxxxxxxx>
> > Signed-off-by: Kai Huang <kai.huang@xxxxxxxxx>
> > Tested-by: Vishal Verma <vishal.l.verma@xxxxxxxxx>
>
> Reviewed-by: Rick Edgecombe <rick.p.edgecombe@xxxxxxxxx>
>
> But this issue is also solved by:
> https://lore.kernel.org/kvm/20260307010358.819645-3-rick.p.edgecombe@xxxxxxxxx/
This depends on Sean's series to move VMXON to the x86 core, so it's not
stable-friendly.
>
> I guess that these changes are correct in either case. There is no need
> for the stricter asserts. But depending on the order the log would be
> confusing in the history when it talks about lockdep warnings. So we'll
> have to keep an eye on things. If this goes first, then it's fine.
I see. Will keep this in mind.
>
> You know, it might have helped to include the splat if you end up with
> a v2.
I thought the lockdep warning should be obvious even w/o the actual splat,
but sure, I can include the splat if a v2 is needed.
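For reference, the pattern the commit message describes is roughly the
following. This is an illustrative sketch only, not the actual kernel code:
the per-cpu variable name is made up here, and the real function does more
than this. It just shows why __this_cpu_{read,write}() is the right
primitive: the flush must stay on one CPU, but disabling preemption for the
whole operation is not required.

```c
/* Hypothetical per-cpu flag; the real variable name differs. */
static DEFINE_PER_CPU(bool, cache_dirty_for_kexec);

void tdx_cpu_flush_cache_for_kexec(void)
{
	/*
	 * Previously: lockdep_assert_preemption_disabled() -- too strong,
	 * since the CPUHP callback runs on the target CPU with preemption
	 * enabled.  With CONFIG_DEBUG_PREEMPT, __this_cpu_*() ops already
	 * check that the context cannot migrate to another CPU.
	 */
	if (!__this_cpu_read(cache_dirty_for_kexec))
		return;

	wbinvd();
	__this_cpu_write(cache_dirty_for_kexec, false);
}
```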
Hi Sean, Paolo, Kirill,

It would be good to merge this upstream and backport it to stable. I'd
appreciate an ack if it looks good to you. Thanks.