[PATCH v2 2/2] sched/topology: Optimize topology_span_sane()

From: Kyle Meyer
Date: Wed Apr 10 2024 - 17:34:22 EST


Optimize topology_span_sane() by removing duplicate comparisons.

Since topology_span_sane() is called inside for_each_cpu(), each
previous CPU has already been compared against every other CPU. The
current CPU therefore only needs to be compared against higher-numbered
CPUs.

The total number of comparisons is reduced from N * (N - 1) to
N * (N - 1) / 2 on each non-NUMA scheduling domain level.
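
For illustration only (not part of the patch), here is a minimal
user-space sketch that counts the comparisons performed by the old and
new loop shapes, assuming a hypothetical contiguous cpu_map of N online
CPUs numbered 0..N-1:

#include <stdio.h>

#define N 8     /* hypothetical number of CPUs in cpu_map */

int main(void)
{
        int old_cmp = 0, new_cmp = 0;

        for (int cpu = 0; cpu < N; cpu++) {
                /* Old: compare cpu against every other CPU. */
                for (int i = 0; i < N; i++) {
                        if (i == cpu)
                                continue;
                        old_cmp++;
                }
                /* New: compare cpu only against higher-numbered CPUs. */
                for (int i = cpu + 1; i < N; i++)
                        new_cmp++;
        }

        printf("old: %d, new: %d\n", old_cmp, new_cmp);
        return 0;
}

For N = 8 this prints "old: 56, new: 28", matching N * (N - 1) and
N * (N - 1) / 2 respectively.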

Signed-off-by: Kyle Meyer <kyle.meyer@xxxxxxx>
Reviewed-by: Yury Norov <yury.norov@xxxxxxxxx>
Acked-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
---
kernel/sched/topology.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 99ea5986038c..b6bcafc09969 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2347,7 +2347,7 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
static bool topology_span_sane(struct sched_domain_topology_level *tl,
const struct cpumask *cpu_map, int cpu)
{
- int i;
+ int i = cpu + 1;

/* NUMA levels are allowed to overlap */
if (tl->flags & SDTL_OVERLAP)
@@ -2359,9 +2359,7 @@ static bool topology_span_sane(struct sched_domain_topology_level *tl,
* breaking the sched_group lists - i.e. a later get_group() pass
* breaks the linking done for an earlier span.
*/
- for_each_cpu(i, cpu_map) {
- if (i == cpu)
- continue;
+ for_each_cpu_from(i, cpu_map) {
/*
* We should 'and' all those masks with 'cpu_map' to exactly
* match the topology we're about to build, but that can only
--
2.44.0