[PATCH RFC 1/1] lockdep: Raise default stack trace limits when KASAN is enabled
From: Mikhail Gavrilov
Date: Fri Mar 13 2026 - 13:19:12 EST
KASAN-enabled kernels with LOCKDEP and PREEMPT_FULL hit
"BUG: MAX_STACK_TRACE_ENTRIES too low!" within 9-23 hours of normal
desktop use.
The root cause is a feedback loop between KASAN slab tracking and
lockdep: every KASAN-tracked slab allocation saves a stack trace via
stack_trace_save() -> arch_stack_walk(). The unwinder calls
is_bpf_text_address(), which under PREEMPT_FULL can trigger RCU
deferred quiescent-state processing -> swake_up_one() -> lock_acquire()
-> lockdep validate_chain() -> save_trace(). This means KASAN's own
stack captures indirectly generate new lockdep dependency chains,
consuming the buffer from both directions.
/proc/lockdep_stats at the moment of overflow confirms that the
stack-trace entry pool is the sole exhausted resource:
stack-trace entries: 524288 [max: 524288] <- 100% full
number of stack traces: 22080 <- unique after dedup
dependency chains: 164665 [max: 524288] <- only 31% used
direct dependencies: 45270 [max: 65536] <- 69%
lock-classes: 2811 [max: 8192] <- 34%
22080 genuinely unique traces averaging ~24 frames each fill the
buffer in under a day. The hash-based deduplication (commit 12593b7467f9) is
working correctly -- the traces are simply all different due to the
deep and varied call stacks from GPU + filesystem + Wine/Proton + KASAN
instrumentation.
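As a rough cross-check of the numbers above (illustrative only, not part
of the patch), the quoted trace count and buffer size do imply the ~24-frame
average:

```python
# Figures from the /proc/lockdep_stats dump quoted above.
MAX_STACK_TRACE_ENTRIES = 1 << 19  # LOCKDEP_STACK_TRACE_BITS = 19 (current default)
unique_traces = 22080              # deduplicated traces at the moment of overflow

# Average frames consumed per unique trace when the pool is 100% full.
avg_frames = MAX_STACK_TRACE_ENTRIES / unique_traces
print(f"{avg_frames:.1f} frames per trace on average")  # -> 23.7
```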
Raise the LOCKDEP_STACK_TRACE_BITS default from 19 to 21 when KASAN is
enabled (2M entries, +12MB). This is negligible compared to KASAN's
own shadow memory overhead (~12.5% of total RAM). Scale
LOCKDEP_STACK_TRACE_HASH_BITS accordingly to maintain dedup efficiency.
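The quoted +12MB follows from simple arithmetic (a sketch, assuming
8-byte unsigned long entries on a 64-bit architecture):

```python
ENTRY_SIZE = 8  # sizeof(unsigned long) on 64-bit: each entry holds one return address

old_bytes = (1 << 19) * ENTRY_SIZE  # default 19 -> 512K entries, 4 MiB
new_bytes = (1 << 21) * ENTRY_SIZE  # default 21 -> 2M entries, 16 MiB
print((new_bytes - old_bytes) // (1 << 20))  # -> 12 (extra MiB of static buffer)
```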
Signed-off-by: Mikhail Gavrilov <mikhail.v.gavrilov@xxxxxxxxx>
---
lib/Kconfig.debug | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 93f356d2b3d9..813654204563 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1618,14 +1618,22 @@ config LOCKDEP_STACK_TRACE_BITS
 	int "Size for MAX_STACK_TRACE_ENTRIES (as Nth power of 2)"
 	depends on LOCKDEP && !LOCKDEP_SMALL
 	range 10 26
+	default 21 if KASAN
 	default 19
 	help
 	  Try increasing this value if you hit "BUG: MAX_STACK_TRACE_ENTRIES too low!" message.
+	  KASAN significantly increases stack trace consumption because its
+	  slab tracking interacts with lockdep's dependency validation under
+	  PREEMPT_FULL, creating a feedback loop. The higher default when
+	  KASAN is enabled costs ~12MB extra, which is negligible compared to
+	  KASAN's own shadow memory overhead.
+
 
 config LOCKDEP_STACK_TRACE_HASH_BITS
 	int "Size for STACK_TRACE_HASH_SIZE (as Nth power of 2)"
 	depends on LOCKDEP && !LOCKDEP_SMALL
 	range 10 26
+	default 16 if KASAN
 	default 14
 	help
 	  Try increasing this value if you need large STACK_TRACE_HASH_SIZE.
--
2.53.0