Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled

From: Quentin Perret
Date: Tue Oct 15 2019 - 07:56:40 EST


On Tuesday 15 Oct 2019 at 12:49:22 (+0100), Valentin Schneider wrote:
>
>
> On 15/10/2019 11:58, Valentin Schneider wrote:
> > On 15/10/2019 11:40, Quentin Perret wrote:
> >>> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
> >>> int i;
> >>>
> >>> rcu_read_lock();
> >>> +
> >>> + if (static_key_enabled(&sched_asym_cpucapacity)) {
> >>> + unsigned int cpu = cpumask_any(cpu_map);
> >>> +
> >>> + if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
> >>> + static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
> >>
> >> Lockdep should scream for this :)
> >
> > Bleh, yes indeed...
> >
>
> Urgh, I forgot about the funny hotplug lock scenario at boot time.
> rebuild_sched_domains() takes the lock but sched_init_domains() doesn't, so
> we don't get the might_sleep() warning at boot time.
>
> So if we want to flip the key post boot time we probably need to separately
> count our asymmetric root domains and flip the key after all the rebuilds,
> outside of the hotplug lock.

Hmm, a problem here is that static_branch*() can block (it takes a
mutex) while you're inside the RCU read-side section, I think.

I suppose you could just move this above rcu_read_lock() and use
rcu_access_pointer() instead?
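
Something like the below, maybe? Completely untested, and it assumes the
rest of detach_destroy_domains() stays as it is today (the for_each_cpu()
loop is just the existing body), with the cpumask_any() pick taken from
your patch:

static void detach_destroy_domains(const struct cpumask *cpu_map)
{
        unsigned int cpu = cpumask_any(cpu_map);
        int i;

        /*
         * Flip the key before entering the RCU read-side section:
         * static_branch_dec_cpuslocked() takes a mutex, so it must not be
         * called under rcu_read_lock(). rcu_access_pointer() is enough
         * here since we only compare the pointer against NULL and never
         * dereference it.
         */
        if (static_key_enabled(&sched_asym_cpucapacity) &&
            rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu)))
                static_branch_dec_cpuslocked(&sched_asym_cpucapacity);

        rcu_read_lock();
        for_each_cpu(i, cpu_map)
                cpu_attach_domain(NULL, &def_root_domain, i);
        rcu_read_unlock();
}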

Thanks,
Quentin