Re: combinatorial explosion in lockdep

From: Hugh Dickins
Date: Mon Aug 04 2008 - 08:22:20 EST


On Sun, 3 Aug 2008, David Miller wrote:
>
> It's probably best to not later clear oops_in_progress when we trigger
> an event like this, to ensure that we do actually get any followon
> messages on the console.

Ah, so it was intentional that your patch sets oops_in_progress without
ever clearing it again. Hmm. I think I'd reword your "probably best"
to "arguably best". If everything really locks up at this point, then
there is no point in clearing oops_in_progress afterwards; but if the
system manages to resume (arguably) normal operation, then leaving it
forever in oops_in_progress worries me, and differs from current practice.

I think I'd rather clear it afterwards in any public patch; but edit
that out privately if it helps while debugging some particular problem.
I did try to reproduce my spinlock lockups yesterday, but without
success, so have no practical experience one way or the other.

But notice that I shouldn't be messing directly with oops_in_progress:
better to use bust_spinlocks() (oops_in_progress++/-- and wake klogd).

[PATCH] bust_spinlocks while reporting spinlock lockup

Use bust_spinlocks() while reporting spinlock lockup to avoid deadlock
inside printk() or the backtraces.

Signed-off-by: Hugh Dickins <hugh@xxxxxxxxxxx>
---

lib/spinlock_debug.c | 2 ++
1 file changed, 2 insertions(+)

--- 2.6.27-rc1/lib/spinlock_debug.c 2008-01-24 22:58:37.000000000 +0000
+++ linux/lib/spinlock_debug.c 2008-08-01 12:41:52.000000000 +0100
@@ -113,6 +113,7 @@ static void __spin_lock_debug(spinlock_t
 		/* lockup suspected: */
 		if (print_once) {
 			print_once = 0;
+			bust_spinlocks(1);
 			printk(KERN_EMERG "BUG: spinlock lockup on CPU#%d, "
 					"%s/%d, %p\n",
 				raw_smp_processor_id(), current->comm,
@@ -121,6 +122,7 @@ static void __spin_lock_debug(spinlock_t
 #ifdef CONFIG_SMP
 			trigger_all_cpu_backtrace();
 #endif
+			bust_spinlocks(0);
 		}
 	}
 }
--