[patch 08/15] sched, hotplug: Move sync_rcu to be with set_cpu_active(false)
From: Thomas Gleixner
Date: Thu Mar 10 2016 - 07:08:22 EST
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
The sync_rcu stuff is specifically for clearing bits in the active
mask, such that everybody will observe the bit cleared and will not
consider the cleared CPU for load-balancing etc.
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
---
kernel/cpu.c | 15 ---------------
kernel/sched/core.c | 14 ++++++++++++++
2 files changed, 14 insertions(+), 15 deletions(-)
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -691,21 +691,6 @@ static int takedown_cpu(unsigned int cpu
struct cpuhp_cpu_state *st = per_cpu_ptr(&cpuhp_state, cpu);
int err;
- /*
- * By now we've cleared cpu_active_mask, wait for all preempt-disabled
- * and RCU users of this state to go away such that all new such users
- * will observe it.
- *
- * For CONFIG_PREEMPT we have preemptible RCU and its sync_rcu() might
- * not imply sync_sched(), so wait for both.
- *
- * Do sync before park smpboot threads to take care the rcu boost case.
- */
- if (IS_ENABLED(CONFIG_PREEMPT))
- synchronize_rcu_mult(call_rcu, call_rcu_sched);
- else
- synchronize_rcu();
-
/* Park the hotplug thread */
kthread_park(per_cpu_ptr(&cpuhp_state, cpu)->thread);
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7345,6 +7345,20 @@ int sched_cpu_deactivate(unsigned int cp
int ret;
set_cpu_active(cpu, false);
+ /*
+ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
+ * users of this state to go away such that all new such users will
+ * observe it.
+ *
+ * For CONFIG_PREEMPT we have preemptible RCU and its sync_rcu() might
+ * not imply sync_sched(), so wait for both.
+ *
+ * Do sync before park smpboot threads to take care the rcu boost case.
+ */
+ if (IS_ENABLED(CONFIG_PREEMPT))
+ synchronize_rcu_mult(call_rcu, call_rcu_sched);
+ else
+ synchronize_rcu();
if (!sched_smp_initialized)
return 0;