Re: [patch V2 08/28] sched/smt: Make sched_smt_present track topology
From: Konrad Rzeszutek Wilk
Date: Thu Nov 29 2018 - 09:43:40 EST
On Sun, Nov 25, 2018 at 07:33:36PM +0100, Thomas Gleixner wrote:
> Currently the 'sched_smt_present' static key is enabled when at CPU bringup
> SMT topology is observed, but it is never disabled. However there is demand
> to also disable the key when the topology changes such that there is no SMT
> present anymore.
>
> Implement this by making the key count the number of cores that have SMT
> enabled.
>
> In particular, the SMT topology bits are set before interrrupts are enabled
> and similarly, are cleared after interrupts are disabled for the last time
> and the CPU dies.
I see that the number you used is '2', but I thought that there are some
CPUs out there (Knights Landing?) that could have four threads?
Would it be better to have a generic function that would provide the
amount of threads the platform does expose - and use that instead
of a constant value?
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>
> ---
> kernel/sched/core.c | 19 +++++++++++--------
> 1 file changed, 11 insertions(+), 8 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5738,15 +5738,10 @@ int sched_cpu_activate(unsigned int cpu)
>
> #ifdef CONFIG_SCHED_SMT
> /*
> - * The sched_smt_present static key needs to be evaluated on every
> - * hotplug event because at boot time SMT might be disabled when
> - * the number of booted CPUs is limited.
> - *
> - * If then later a sibling gets hotplugged, then the key would stay
> - * off and SMT scheduling would never be functional.
> + * When going up, increment the number of cores with SMT present.
> */
> - if (cpumask_weight(cpu_smt_mask(cpu)) > 1)
> - static_branch_enable_cpuslocked(&sched_smt_present);
> + if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> + static_branch_inc_cpuslocked(&sched_smt_present);
> #endif
> set_cpu_active(cpu, true);
>
> @@ -5790,6 +5785,14 @@ int sched_cpu_deactivate(unsigned int cp
> */
> synchronize_rcu_mult(call_rcu, call_rcu_sched);
>
> +#ifdef CONFIG_SCHED_SMT
> + /*
> + * When going down, decrement the number of cores with SMT present.
> + */
> + if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> + static_branch_dec_cpuslocked(&sched_smt_present);
> +#endif
> +
> if (!sched_smp_initialized)
> return 0;
>
>
>