Re: [tip: sched/core] sched: Add cluster scheduler level for x86

From: Barry Song
Date: Thu Oct 21 2021 - 06:32:52 EST


On Thu, Oct 21, 2021 at 9:43 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Wed, Oct 20, 2021 at 10:36:19PM +0200, Peter Zijlstra wrote:
>
> > OK, I think I see what's happening.
> >
> > AFAICT cacheinfo.c does *NOT* set l2c_id on AMD/Hygon hardware, this
> > means it's set to BAD_APICID.
> >
> > This then results in match_l2c() never matching. And as a direct
> > consequence set_cpu_sibling_map() will generate cpu_l2c_shared_mask with
> > just the one CPU set.
> >
> > And we have the above result and things come unstuck if we assume:
> > SMT <= L2 <= LLC
> >
> > Now, the big question, how to fix this... Does AMD have means of
> > actually setting l2c_id or should we fall back to using match_smt() for
> > l2c_id == BAD_APICID ?
>
> The latter looks something like the below and ought to make EPYC at
> least function as it did before.
>
>
> ---
> arch/x86/kernel/smpboot.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
> index 849159797101..c2671b2333d1 100644
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -472,7 +472,7 @@ static bool match_l2c(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
>
> /* Do not match if we do not have a valid APICID for cpu: */
> if (per_cpu(cpu_l2c_id, cpu1) == BAD_APICID)
> - return false;
> + return match_smt(c, o); /* assume at least SMT shares L2 */

Rather than making a fake cluster_cpus and cluster_cpus_list, which will
be exposed to userspace through /sys/devices/system/cpu/cpuX/topology,
could we just fix the sched_domain mask as below?
It would be odd to users that a CPU has a BAD cluster_id but still shows
"meaningful" cluster_cpus and cluster_cpus_list in sysfs.

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 5094ab0bae58..0f9d706a7507 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -687,6 +687,15 @@ const struct cpumask *cpu_coregroup_mask(int cpu)

const struct cpumask *cpu_clustergroup_mask(int cpu)
{
+ /*
+ * If L2 (the cluster) is not represented, make the cluster
+ * sched_domain the same as the SMT domain, so that this redundant
+ * sched_domain can be dropped and we avoid the complaint that the
+ * SMT domain is not a subset of the cluster domain.
+ */
+ if (cpumask_subset(cpu_l2c_shared_mask(cpu), cpu_smt_mask(cpu)))
+ return cpu_smt_mask(cpu);
+
return cpu_l2c_shared_mask(cpu);
}



>
> /* Do not match if L2 cache id does not match: */
> if (per_cpu(cpu_l2c_id, cpu1) != per_cpu(cpu_l2c_id, cpu2))

Thanks
Barry