Re: hotplug lockdep splat (tip)

From: Peter Zijlstra
Date: Mon Sep 04 2017 - 10:23:27 EST


On Mon, Sep 04, 2017 at 09:55:02AM +0200, Peter Zijlstra wrote:
> On Sun, Sep 03, 2017 at 08:59:35AM +0200, Mike Galbraith wrote:
> >
> > [ 126.626908] Unregister pv shared memory for cpu 1
> > [ 126.631041]
> > [ 126.631269] ======================================================
> > [ 126.632442] WARNING: possible circular locking dependency detected
> > [ 126.633599] 4.13.0.g06260ca-tip-lockdep #2 Tainted: G E
> > [ 126.634380] ------------------------------------------------------
> > [ 126.635124] stress-cpu-hotp/3156 is trying to acquire lock:
> > [ 126.635804] ((complete)&st->done){+.+.}, at: [<ffffffff8108d19a>] takedown_cpu+0x8a/0xf0
> > [ 126.636809]
> > [ 126.636809] but task is already holding lock:
> > [ 126.637567] (sparse_irq_lock){+.+.}, at: [<ffffffff81107ac7>] irq_lock_sparse+0x17/0x20
> > [ 126.638665]
>
> https://lkml.kernel.org/r/20170829193416.GC32112@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>
> I still need to write a coherent Changelog and comments for that :/

How's this?

---
Subject: smp/hotplug,lockdep: Annotate st->done

With the new lockdep cross-release feature, cpu hotplug reports the
following deadlock:

takedown_cpu()
  irq_lock_sparse()
  wait_for_completion(&st->done)

				cpuhp_thread_fun()
				  cpuhp_up_callbacks()
				    cpuhp_invoke_callback()
				      irq_affinity_online_cpu()
				        irq_lock_sparse()
				        irq_unlock_sparse()
				  complete(&st->done)

However, CPU-up and CPU-down are globally serialized, so the above
scenario cannot in fact happen.
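
Both directions nest inside cpu_add_remove_lock: do_cpu_down() and
do_cpu_up() take it via cpu_maps_update_begin() before calling into
_cpu_down()/_cpu_up(). Roughly (simplified from kernel/cpu.c, error
handling omitted):

	static int do_cpu_down(unsigned int cpu, enum cpuhp_state target)
	{
		int err;

		/* serializes against do_cpu_up() as well */
		cpu_maps_update_begin();
		err = _cpu_down(cpu, 0, target);
		cpu_maps_update_done();

		return err;
	}

so the wait_for_completion() above can only ever pair with the
complete() issued by the same hotplug operation.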

Annotate this by re-initializing st->done for each operation, which
splits the st->done dependency chain between the up and down paths.
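
The reason lockdep_reinit_st_done() below has to be a macro is that,
with cross-release enabled, init_completion() is itself a macro that
sets up a static lock class key per expansion site; from memory it
looks roughly like this (the real definition lives in
<linux/completion.h>, quoted here only for illustration):

	#define init_completion(x)					\
	do {								\
		static struct lock_class_key __key;			\
		lockdep_init_map_crosslock((struct lockdep_map *)&(x)->map, \
					   "(complete)" #x, &__key, 0);	\
		__init_completion(x);					\
	} while (0)

Re-initializing st->done once from _cpu_down() and once from _cpu_up()
thus gives the completion two distinct lockdep classes, and the
wait/complete dependencies of the two directions land in separate
chains.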

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
kernel/cpu.c | 35 +++++++++++++++++++++++++++++------
1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index acf5308fad51..0f44ddf64f24 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -533,6 +533,28 @@ void __init cpuhp_threads_init(void)
kthread_unpark(this_cpu_read(cpuhp_state.thread));
}

+/*
+ * _cpu_down() and _cpu_up() have different lock ordering wrt st->done, but
+ * because these two functions are globally serialized and st->done is private
+ * to them, we can simply re-init st->done for each of them to separate the
+ * lock chains.
+ *
+ * Must be macro to ensure we have two different call sites.
+ */
+#ifdef CONFIG_LOCKDEP
+#define lockdep_reinit_st_done() \
+do { \
+ int __cpu; \
+ for_each_possible_cpu(__cpu) { \
+ struct cpuhp_cpu_state *st = \
+ per_cpu_ptr(&cpuhp_state, __cpu); \
+ init_completion(&st->done); \
+ } \
+} while(0)
+#else
+#define lockdep_reinit_st_done()
+#endif
+
#ifdef CONFIG_HOTPLUG_CPU
/**
* clear_tasks_mm_cpumask - Safely clear tasks' mm_cpumask for a CPU
@@ -676,12 +698,6 @@ void cpuhp_report_idle_dead(void)
cpuhp_complete_idle_dead, st, 0);
}

-#else
-#define takedown_cpu NULL
-#endif
-
-#ifdef CONFIG_HOTPLUG_CPU
-
/* Requires cpu_add_remove_lock to be held */
static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
enum cpuhp_state target)
@@ -697,6 +713,8 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,

cpus_write_lock();

+ lockdep_reinit_st_done();
+
cpuhp_tasks_frozen = tasks_frozen;

prev_state = st->state;
@@ -759,6 +777,9 @@ int cpu_down(unsigned int cpu)
return do_cpu_down(cpu, CPUHP_OFFLINE);
}
EXPORT_SYMBOL(cpu_down);
+
+#else
+#define takedown_cpu NULL
#endif /*CONFIG_HOTPLUG_CPU*/

/**
@@ -806,6 +827,8 @@ static int _cpu_up(unsigned int cpu, int tasks_frozen, enum cpuhp_state target)

cpus_write_lock();

+ lockdep_reinit_st_done();
+
if (!cpu_present(cpu)) {
ret = -EINVAL;
goto out;