Re: [PATCH v2] x86/virt/tdx: Fix lockdep assertion failure in cache flush for kexec

From: Nikolay Borisov

Date: Thu Mar 05 2026 - 13:34:08 EST




On 02.03.26 at 12:22, Kai Huang wrote:
TDX can leave the cache in an incoherent state for the memory it uses.
During kexec the kernel does a WBINVD for each CPU before memory gets
reused in the second kernel.

There were two considerations for where this WBINVD should happen. To
handle cases where the cache might become incoherent during the early
stages of kexec, it needs to be done late in the kexec path, when the
kexecing CPU stops all remote CPUs. However, that late part of the
kexec process is sensitive to races, so to avoid perturbing it, it is
better to do the flush earlier when possible.

The existing solution is to track the need for the kexec-time WBINVD
generically (i.e., not just for TDX) in a per-cpu var. The late
invocation only happens if the earlier TDX-specific logic in
tdx_cpu_flush_cache_for_kexec() didn't already take care of the work.
This earlier WBINVD logic was built into KVM's existing syscore ops
shutdown() handler, which is called earlier in the kexec path.

However, this accidentally added the flush to KVM's unload path as well
(and also to the "error path" when bringing up TDX during KVM module
load), because those paths use the same internal functions. This makes
some sense too, though: if KVM is being unloaded, TDX cache-affecting
operations will likely cease, so it is a good point to do the work
before KVM is unloaded and loses the chance to handle a future
shutdown operation.

Unfortunately this KVM unload invocation triggers a lockdep warning in
tdx_cpu_flush_cache_for_kexec(). Since tdx_cpu_flush_cache_for_kexec()
is doing WBINVD on a specific CPU, it has an assert for preemption being
disabled. This works fine for the kexec time invocation, but the KVM
unload path calls this as part of a CPUHP callback for which, despite
always executing on the target CPU, preemption is not disabled.

It might be better to add the earlier invocation logic to a dedicated
arch/x86 TDX syscore shutdown() handler, but to keep the fix backport
friendly, just adjust the lockdep assert in
tdx_cpu_flush_cache_for_kexec().

The real requirement is that tdx_cpu_flush_cache_for_kexec() runs on a
single CPU from start to finish. It is OK for it to be preempted in
the middle, as long as it is not rescheduled to another CPU.

TLDR: It wants migration disabled.


Remove the too-strong lockdep_assert_preemption_disabled(), and change
this_cpu_{read|write}() to __this_cpu_{read|write}(), which (when
CONFIG_DEBUG_PREEMPT is enabled) provide the more appropriate check:
they accept any condition that guarantees the context cannot be moved
to another CPU mid-way.
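As a sketch, the change described above looks roughly like this (illustrative only; the per-cpu variable name is assumed from context, see the actual diff in the patch):

```diff
-	lockdep_assert_preemption_disabled();
-
-	if (this_cpu_read(cache_state_incoherent)) {
+	/*
+	 * With CONFIG_DEBUG_PREEMPT, __this_cpu_*() warn unless the
+	 * context cannot migrate: preemption disabled, IRQs disabled,
+	 * affinity restricted to a single CPU, or migration disabled.
+	 */
+	if (__this_cpu_read(cache_state_incoherent)) {
 		wbinvd();
-		this_cpu_write(cache_state_incoherent, false);
+		__this_cpu_write(cache_state_incoherent, false);
 	}
```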

Fixes: 61221d07e815 ("KVM/TDX: Explicitly do WBINVD when no more TDX SEAMCALLs")
Cc: stable@xxxxxxxxxxxxxxx
Reported-by: Vishal Verma <vishal.l.verma@xxxxxxxxx>
Signed-off-by: Kai Huang <kai.huang@xxxxxxxxx>
Tested-by: Vishal Verma <vishal.l.verma@xxxxxxxxx>


So how exactly does this patch prevent the BUG: printk in check_preemption_disabled from triggering, if the lockdep assert was triggering?