[tip: locking/core] lockdep: Raise default stack trace limits when KASAN is enabled

From: tip-bot2 for Mikhail Gavrilov

Date: Wed Mar 18 2026 - 04:09:17 EST


The following commit has been merged into the locking/core branch of tip:

Commit-ID: 891626973b2faf468565a253ca55373e0b9675de
Gitweb: https://git.kernel.org/tip/891626973b2faf468565a253ca55373e0b9675de
Author: Mikhail Gavrilov <mikhail.v.gavrilov@xxxxxxxxx>
AuthorDate: Fri, 13 Mar 2026 22:10:02 +05:00
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Mon, 16 Mar 2026 13:16:49 +01:00

lockdep: Raise default stack trace limits when KASAN is enabled

KASAN-enabled kernels with LOCKDEP and PREEMPT_FULL hit
"BUG: MAX_STACK_TRACE_ENTRIES too low!" within 9-23 hours of normal
desktop use.

The root cause is a feedback loop between KASAN slab tracking and
lockdep: every KASAN-tracked slab allocation saves a stack trace via
stack_trace_save() -> arch_stack_walk(). The unwinder calls
is_bpf_text_address(), which under PREEMPT_FULL can trigger RCU
deferred quiescent-state processing -> swake_up_one() -> lock_acquire()
-> lockdep validate_chain() -> save_trace(). This means KASAN's own
stack captures indirectly generate new lockdep dependency chains,
consuming the buffer from both directions.

/proc/lockdep_stats at the moment of overflow confirms that
stack-trace entries are the sole exhausted resource:

stack-trace entries: 524288 [max: 524288] <- 100% full
number of stack traces: 22080 <- unique after dedup
dependency chains: 164665 [max: 524288] <- only 31% used
direct dependencies: 45270 [max: 65536] <- 69%
lock-classes: 2811 [max: 8192] <- 34%

22080 genuinely unique traces averaging ~24 frames each fill the
buffer in under a day. The hash-based deduplication (12593b7467f9) is
working correctly -- the traces are simply all different due to the
deep and varied call stacks from GPU + filesystem + Wine/Proton + KASAN
instrumentation.
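A quick back-of-the-envelope check on the figures above (a sketch only,
not part of the patch; the ~24-frame average is the approximation quoted
in the text):

```python
# Do 22080 unique traces at ~24 frames each overflow a 2^19-entry buffer?
traces = 22080          # "number of stack traces" from lockdep_stats
avg_frames = 24         # approximate average depth quoted above
max_entries = 1 << 19   # old MAX_STACK_TRACE_ENTRIES default (2^19 = 524288)

consumed = traces * avg_frames
print(consumed, max_entries, consumed > max_entries)
```

At roughly 529920 entries needed against 524288 available, the buffer is
exhausted even with perfect deduplication.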

Raise the LOCKDEP_STACK_TRACE_BITS default from 19 to 21 when KASAN is
enabled (2M entries, +12MB). This is negligible compared to KASAN's
own shadow memory overhead (~12.5% of total RAM). Scale
LOCKDEP_STACK_TRACE_HASH_BITS accordingly to maintain dedup efficiency.
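The "+12MB" figure follows directly from the bit sizes (a sketch assuming
8-byte entries, i.e. unsigned long on a 64-bit kernel):

```python
# Memory cost of raising LOCKDEP_STACK_TRACE_BITS from 19 to 21,
# assuming each stack-trace entry is an 8-byte unsigned long.
ENTRY_SIZE = 8
old_bytes = (1 << 19) * ENTRY_SIZE   # default 19 ->  4 MiB
new_bytes = (1 << 21) * ENTRY_SIZE   # default 21 -> 16 MiB
print((new_bytes - old_bytes) >> 20)  # delta in MiB
```

The delta is 12 MiB, matching the commit message.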

Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@xxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://patch.msgid.link/20260313171118.1702954-2-mikhail.v.gavrilov@xxxxxxxxx
---
lib/Kconfig.debug | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 4e2dfbb..e51e3c5 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1617,14 +1617,22 @@ config LOCKDEP_STACK_TRACE_BITS
int "Size for MAX_STACK_TRACE_ENTRIES (as Nth power of 2)"
depends on LOCKDEP && !LOCKDEP_SMALL
range 10 26
+ default 21 if KASAN
default 19
help
Try increasing this value if you hit "BUG: MAX_STACK_TRACE_ENTRIES too low!" message.

+ KASAN significantly increases stack trace consumption because its
+ slab tracking interacts with lockdep's dependency validation under
+ PREEMPT_FULL, creating a feedback loop. The higher default when
+ KASAN is enabled costs ~12MB extra, which is negligible compared to
+ KASAN's own shadow memory overhead.
+
config LOCKDEP_STACK_TRACE_HASH_BITS
int "Size for STACK_TRACE_HASH_SIZE (as Nth power of 2)"
depends on LOCKDEP && !LOCKDEP_SMALL
range 10 26
+ default 16 if KASAN
default 14
help
Try increasing this value if you need large STACK_TRACE_HASH_SIZE.