[tip: locking/urgent] softirq: Avoid bad tracing / lockdep interaction

From: tip-bot2 for Peter Zijlstra
Date: Fri Dec 18 2020 - 11:04:08 EST


The following commit has been merged into the locking/urgent branch of tip:

Commit-ID: 91ea62d58bd661827c328a2c6c02a87fa4aae88b
Gitweb: https://git.kernel.org/tip/91ea62d58bd661827c328a2c6c02a87fa4aae88b
Author: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
AuthorDate: Fri, 18 Dec 2020 16:39:14 +01:00
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Fri, 18 Dec 2020 16:53:13 +01:00

softirq: Avoid bad tracing / lockdep interaction

Similar to commit:

1a63dcd8765b ("softirq: Reorder trace_softirqs_on to prevent lockdep splat")

__local_bh_enable_ip() can also call into tracing with inconsistent
state. Unlike that commit, we don't need to worry about the tracepoint
because 'cnt-1' never matches preempt_count() (by construction).

Reported-by: Heiko Carstens <hca@xxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Tested-by: Heiko Carstens <hca@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/20201218154519.GW3092@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
---
kernel/softirq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 09229ad..0f1d3a3 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -185,7 +185,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
 	 * Keep preemption disabled until we are done with
 	 * softirq processing:
 	 */
-	preempt_count_sub(cnt - 1);
+	__preempt_count_sub(cnt - 1);
 
 	if (unlikely(!in_interrupt() && local_softirq_pending())) {
 		/*
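
For readers who want to see the shape of the changelog argument, below is a
stand-alone userspace model -- not kernel code, and every model_* name is
hypothetical -- of why switching to the raw helper loses nothing: the
instrumented decrement only calls into tracing when the value being
subtracted equals the current preemption count (i.e. when the subtraction
would re-enable preemption), and __local_bh_enable_ip() subtracts 'cnt - 1'
while the count still holds at least 'cnt', so that path can never be taken
there; the raw __preempt_count_sub() simply skips the instrumentation.

/*
 * Stand-alone model of the changelog argument above -- NOT kernel code.
 * Build with: cc -Wall -o bh_model bh_model.c && ./bh_model
 */
#include <assert.h>
#include <stdio.h>

static unsigned int model_preempt_count;	/* stand-in for preempt_count() */

/* Raw decrement, the role __preempt_count_sub() plays in the patch:
 * plain arithmetic, no tracing or lockdep involvement. */
static void model_raw_count_sub(unsigned int val)
{
	model_preempt_count -= val;
}

/* Instrumented decrement, the role preempt_count_sub() plays with
 * preemption tracing enabled: it calls into "tracing" only when the
 * subtraction is about to re-enable preemption (count == val). */
static void model_traced_count_sub(unsigned int val)
{
	if (model_preempt_count == val)
		printf("tracing hook: preemption re-enabled\n");
	model_raw_count_sub(val);
}

/* Tail of a __local_bh_enable_ip()-like path: keep one count held while
 * pending softirqs run, then drop the last count. */
static void model_local_bh_enable(unsigned int cnt)
{
	/*
	 * The count still contains all of 'cnt' here, so 'cnt - 1' can
	 * never equal it and the tracing hook above could never fire --
	 * which is why the raw helper is sufficient for this line.
	 */
	assert(model_preempt_count >= cnt);
	model_raw_count_sub(cnt - 1);		/* the patched line */

	printf("softirq processing runs with count=%u\n", model_preempt_count);

	model_traced_count_sub(1);		/* final drop may legitimately trace */
}

int main(void)
{
	unsigned int cnt = 2;			/* arbitrary example offset */

	model_preempt_count += cnt;		/* the local_bh_disable() side */
	model_local_bh_enable(cnt);
	printf("final count=%u\n", model_preempt_count);
	return 0;
}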