[PATCH 4.17 069/220] softirq: Reorder trace_softirqs_on to prevent lockdep splat
From: Greg Kroah-Hartman
Date: Sun Jul 01 2018 - 12:39:56 EST
4.17-stable review patch. If anyone has any objections, please let me know.
------------------
From: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
commit 1a63dcd8765bc8680481dc2f9acf6ef13cee6d27 upstream.
I'm able to reproduce a lockdep splat with config options:
CONFIG_PROVE_LOCKING=y,
CONFIG_DEBUG_LOCK_ALLOC=y and
CONFIG_PREEMPTIRQ_EVENTS=y
$ echo 1 > /d/tracing/events/preemptirq/preempt_enable/enable
[ 26.112609] DEBUG_LOCKS_WARN_ON(current->softirqs_enabled)
[ 26.112636] WARNING: CPU: 0 PID: 118 at kernel/locking/lockdep.c:3854
[...]
[ 26.144229] Call Trace:
[ 26.144926] <IRQ>
[ 26.145506] lock_acquire+0x55/0x1b0
[ 26.146499] ? __do_softirq+0x46f/0x4d9
[ 26.147571] ? __do_softirq+0x46f/0x4d9
[ 26.148646] trace_preempt_on+0x8f/0x240
[ 26.149744] ? trace_preempt_on+0x4d/0x240
[ 26.150862] ? __do_softirq+0x46f/0x4d9
[ 26.151930] preempt_count_sub+0x18a/0x1a0
[ 26.152985] __do_softirq+0x46f/0x4d9
[ 26.153937] irq_exit+0x68/0xe0
[ 26.154755] smp_apic_timer_interrupt+0x271/0x280
[ 26.156056] apic_timer_interrupt+0xf/0x20
[ 26.157105] </IRQ>
The issue was this:
preempt_count = 1 << SOFTIRQ_SHIFT

	__local_bh_enable(cnt = 1 << SOFTIRQ_SHIFT) {
		if (softirq_count() == (cnt & SOFTIRQ_MASK)) {
			trace_softirqs_on() {
				current->softirqs_enabled = 1;
			}
		}
		preempt_count_sub(cnt) {
			trace_preempt_on() {
				tracepoint() {
					rcu_read_lock_sched() {
						// jumps into lockdep
Where preempt_count still has softirqs disabled, but
current->softirqs_enabled is true, and we get a splat.
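For illustration only, a rough sketch of __local_bh_enable() with the
reordering applied (assembled from the hunk below; surrounding context is
elided and this is not verbatim kernel source):

	static void __local_bh_enable(unsigned int cnt)
	{
		lockdep_assert_irqs_disabled();

		/*
		 * If this enable brings preempt_count() to zero, emit the
		 * preempt_enable event now, while lockdep still sees
		 * softirqs as disabled.
		 */
		if (preempt_count() == cnt)
			trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());

		if (softirq_count() == (cnt & SOFTIRQ_MASK))
			trace_softirqs_on(_RET_IP_);

		/* Raw decrement, so the preempt tracer is not re-entered. */
		__preempt_count_sub(cnt);
	}

This keeps current->softirqs_enabled consistent with preempt_count at the
point where the preempt_enable tracepoint fires.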
Link: http://lkml.kernel.org/r/20180607201143.247775-1-joel@xxxxxxxxxxxxxxxxx
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
Cc: Tom Zanussi <tom.zanussi@xxxxxxxxxxxxxxx>
Cc: Namhyung Kim <namhyung@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
Cc: Paul McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
Cc: Todd Kjos <tkjos@xxxxxxxxxx>
Cc: Erick Reyes <erickreyes@xxxxxxxxxx>
Cc: Julia Cartwright <julia@xxxxxx>
Cc: Byungchul Park <byungchul.park@xxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Reviewed-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Fixes: d59158162e032 ("tracing: Add support for preempt and irq enable/disable events")
Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
kernel/softirq.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -139,9 +139,13 @@ static void __local_bh_enable(unsigned i
 {
 	lockdep_assert_irqs_disabled();
 
+	if (preempt_count() == cnt)
+		trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
+
 	if (softirq_count() == (cnt & SOFTIRQ_MASK))
 		trace_softirqs_on(_RET_IP_);
-	preempt_count_sub(cnt);
+
+	__preempt_count_sub(cnt);
 }
 
 /*