[PATCH] locking/spinlock_debug: Prevent unnecessary recursive spin_dump()

From: Byungchul Park
Date: Wed Mar 23 2016 - 22:37:25 EST


Printing "lockup suspected" for the same lock more than once is
meaningless. Furtheremore, it can cause an indefinite recursion if it
occures within a printk(). For example,

printk()
  spin_lock(A)
    spin_dump() // lockup suspected for A
      printk()
        spin_lock(A)
          spin_dump() // lockup suspected for A
            ... indefinitely

where "A" can be any lock which is used within printk().

The recursion can be stopped if the lock causing the lockup is
released. However, the warning message, repeated and accumulated
unnecessarily, can eat up the printk log buffer, and producing it
wastes CPU time as well. We have to avoid this situation.

Of course, this patch cannot detect the recursion perfectly. In a rare
case it can still print the "lockup suspected" message recursively
several times, as explained after the sketch below. But at least we
can detect and stop unnecessary indefinite recursion without missing
any important message. Detecting it perfectly would require more
complex code and more memory. Who cares?

Signed-off-by: Byungchul Park <byungchul.park@xxxxxxx>
---
kernel/locking/spinlock_debug.c | 31 ++++++++++++++++++++++++++++++-
1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 0374a59..653eea9 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -103,6 +103,31 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
 	lock->owner_cpu = -1;
 }
 
+static raw_spinlock_t *sus_lock;
+static unsigned int sus_cpu = -1;
+static pid_t sus_pid = -1;
+
+static inline void enter_lockup_suspected(raw_spinlock_t *lock)
+{
+	sus_lock = lock;
+	sus_cpu = raw_smp_processor_id();
+	sus_pid = task_pid_nr(current);
+}
+
+static inline void exit_lockup_suspected(raw_spinlock_t *lock)
+{
+	sus_lock = NULL;
+	sus_cpu = -1;
+	sus_pid = -1;
+}
+
+static inline int detect_recursive_lockup_suspected(raw_spinlock_t *lock)
+{
+	return sus_lock == lock &&
+	       sus_cpu == raw_smp_processor_id() &&
+	       sus_pid == task_pid_nr(current);
+}
+
 static void __spin_lock_debug(raw_spinlock_t *lock)
 {
 	u64 i;
@@ -114,7 +139,11 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
 		__delay(1);
 	}
 	/* lockup suspected: */
-	spin_dump(lock, "lockup suspected");
+	if (likely(!detect_recursive_lockup_suspected(lock))) {
+		enter_lockup_suspected(lock);
+		spin_dump(lock, "lockup suspected");
+		exit_lockup_suspected(lock);
+	}
 #ifdef CONFIG_SMP
 	trigger_all_cpu_backtrace();
 #endif
--
1.9.1