Re: [PATCH] printk/tracing: Do not trace printk_nmi_enter()
From: Peter Zijlstra
Date: Fri Sep 07 2018 - 03:45:56 EST
On Thu, Sep 06, 2018 at 11:31:51AM +0900, Sergey Senozhatsky wrote:
> An alternative option, thus, could be reinstating the rule that
> lockdep_off/on should be the first and the last thing we do in
> nmi_enter/nmi_exit. E.g.
>
> nmi_enter()
>	lockdep_off();
>	printk_nmi_enter();
>
> nmi_exit()
>	printk_nmi_exit();
>	lockdep_on();
Yes that. Also, those should probably be inline functions.
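For context, and not part of the patch below: lockdep_off()/lockdep_on()
are nothing more than a per-task recursion counter that every lockdep
entry point checks before doing any real work, which is why making them
inlines is cheap. A rough stand-alone sketch of that guard pattern --
userspace code, with made-up *_sketch names and a plain global standing
in for current->lockdep_recursion:

/*
 * Stand-alone sketch of the recursion-guard pattern that
 * lockdep_off()/lockdep_on() implement; the *_sketch names are
 * invented for illustration only.
 */
#include <stdio.h>

static int lockdep_recursion;	/* stands in for current->lockdep_recursion */

static inline void lockdep_off_sketch(void) { lockdep_recursion++; }
static inline void lockdep_on_sketch(void)  { lockdep_recursion--; }

static void lock_acquire_sketch(void)
{
	if (lockdep_recursion)		/* checking disabled, bail out */
		return;
	printf("lockdep validates this acquire\n");
}

int main(void)
{
	lock_acquire_sketch();		/* validated */
	lockdep_off_sketch();		/* e.g. on nmi_enter() */
	lock_acquire_sketch();		/* skipped while "in NMI" */
	lockdep_on_sketch();		/* e.g. on nmi_exit() */
	return 0;
}

With the ordering fixed below, everything between lockdep_off() and
lockdep_on() in nmi_enter()/nmi_exit() runs with that counter raised.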
---
Subject: locking/lockdep: Fix NMI handling
Someone put code in the NMI handler before lockdep_off(). Since lockdep
is not NMI safe, this wrecks stuff.
Fixes: 42a0bb3f7138 ("printk/nmi: generic solution for safe printk in NMI")
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
 include/linux/hardirq.h  |  4 ++--
 include/linux/lockdep.h  | 11 +++++++++--
 kernel/locking/lockdep.c | 12 ------------
 3 files changed, 11 insertions(+), 16 deletions(-)
diff --git a/include/linux/hardirq.h b/include/linux/hardirq.h
index 0fbbcdf0c178..8d70270d9486 100644
--- a/include/linux/hardirq.h
+++ b/include/linux/hardirq.h
@@ -62,8 +62,8 @@ extern void irq_exit(void);
#define nmi_enter() \
do { \
- printk_nmi_enter(); \
lockdep_off(); \
+ printk_nmi_enter(); \
ftrace_nmi_enter(); \
BUG_ON(in_nmi()); \
preempt_count_add(NMI_OFFSET + HARDIRQ_OFFSET); \
@@ -78,8 +78,8 @@ extern void irq_exit(void);
BUG_ON(!in_nmi()); \
preempt_count_sub(NMI_OFFSET + HARDIRQ_OFFSET); \
ftrace_nmi_exit(); \
- lockdep_on(); \
printk_nmi_exit(); \
+ lockdep_on(); \
} while (0)
#endif /* LINUX_HARDIRQ_H */
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index b0d0b51c4d85..70bb9e8fc8f9 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -272,8 +272,15 @@ extern void lockdep_reset_lock(struct lockdep_map *lock);
extern void lockdep_free_key_range(void *start, unsigned long size);
extern asmlinkage void lockdep_sys_exit(void);
-extern void lockdep_off(void);
-extern void lockdep_on(void);
+static inline void lockdep_off(void)
+{
+ current->lockdep_recursion++;
+}
+
+static inline void lockdep_on(void)
+{
+ current->lockdep_recursion--;
+}
/*
* These methods are used by specific locking variants (spinlocks,
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index e406c5fdb41e..da51ed1c0c21 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -317,18 +317,6 @@ static inline u64 iterate_chain_key(u64 key, u32 idx)
return k0 | (u64)k1 << 32;
}
-void lockdep_off(void)
-{
- current->lockdep_recursion++;
-}
-EXPORT_SYMBOL(lockdep_off);
-
-void lockdep_on(void)
-{
- current->lockdep_recursion--;
-}
-EXPORT_SYMBOL(lockdep_on);
-
/*
* Debugging switches:
*/