Re: [PATCH v2] sched/topology: improve topology_span_sane speed
From: Valentin Schneider
Date: Mon Dec 16 2024 - 10:39:30 EST
Hi,
Sorry it took me this long to get to this.
On 31/10/24 15:04, Steve Wahl wrote:
> Use a different approach to topology_span_sane(), that checks for the
> same constraint of no partial overlaps for any two CPU sets for
> non-NUMA topology levels, but does so in a way that is O(N) rather
> than O(N^2).
>
> Instead of comparing with all other masks to detect collisions, keep
> one mask that includes all CPUs seen so far and detect collisions with
> a single cpumask_intersects test.
>
> If the current mask has no collisions with previously seen masks, it
> should be a new mask, which can be uniquely identified ("id") by the
> lowest bit set in this mask. Mark that we've seen a mask with this
> id, and add the CPUs in this mask to the list of those seen.
>
> If the current mask does collide with previously seen masks, it should
> be exactly equal to a mask seen before, identified once again by the
> lowest bit the current mask has set. It's an error if we haven't seen
> a mask with that id, or if the current mask doesn't match the one we
> get by looking up that id.
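
FWIW, my reading of the above, written out as a standalone userspace toy with
plain unsigned long bitmasks standing in for cpumasks and spans[] standing in
for tl->mask() (names and simplifications are mine, and I've left out the
really odd cpu != id case for brevity):

/*
 * Userspace sketch of the O(N) span check: "covered" accumulates all CPUs
 * seen so far, "id_seen" records the lowest set bit of each distinct span.
 */
#include <stdbool.h>
#include <stdio.h>

static bool spans_sane(const unsigned long *spans, int nr_cpus)
{
        unsigned long covered = 0, id_seen = 0;
        int cpu;

        for (cpu = 0; cpu < nr_cpus; cpu++) {
                unsigned long mask = spans[cpu];
                int id = __builtin_ctzl(mask);  /* lowest bit set == unique id */

                if (!(mask & covered)) {
                        /* No collision: must be a brand new span */
                        if (id_seen & (1UL << id))
                                return false;
                        id_seen |= 1UL << id;
                        covered |= mask;
                } else {
                        /* Collision: must exactly match the span recorded for this id */
                        if (!(id_seen & (1UL << id)) || spans[id] != mask)
                                return false;
                }
        }
        return true;
}

int main(void)
{
        unsigned long ok[]  = { 0x3, 0x3, 0xc, 0xc }; /* {0,1} {0,1} {2,3} {2,3} */
        unsigned long bad[] = { 0x3, 0x6, 0xc, 0xc }; /* CPU1 spans {1,2}: partial overlap */

        printf("ok: %d, bad: %d\n", spans_sane(ok, 4), spans_sane(bad, 4));
        return 0;
}
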
>
> Move the topology_span_sane() check out of the existing topology level
> loop, let it do its own looping to match the needs of this algorithm.
>
> On a system with 1920 processors (16 sockets, 60 cores, 2 threads),
> the average time to take one processor offline is reduced from 2.18
> seconds to 1.01 seconds. (Off-lining 959 of 1920 processors took
> 34m49.765s without this change, 16m10.038s with this change in place.)
>
> Signed-off-by: Steve Wahl <steve.wahl@xxxxxxx>
> ---
>
> Version 2: Adopted suggestion by K Prateek Nayak that removes an array and
> simplifies the code, and eliminates the erroneous use of
> num_possible_cpus() that Peter Zijlstra noted.
>
> Version 1 discussion:
> https://lore.kernel.org/all/20241010155111.230674-1-steve.wahl@xxxxxxx/
>
> kernel/sched/topology.c | 73 +++++++++++++++++++++++++++--------------
> 1 file changed, 48 insertions(+), 25 deletions(-)
>
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index 9748a4c8d668..6a2a3e91d59e 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2356,35 +2356,58 @@ static struct sched_domain *build_sched_domain(struct sched_domain_topology_leve
>
> /*
> * Ensure topology masks are sane, i.e. there are no conflicts (overlaps) for
> - * any two given CPUs at this (non-NUMA) topology level.
> + * any two given CPUs on non-NUMA topology levels.
> */
> -static bool topology_span_sane(struct sched_domain_topology_level *tl,
> - const struct cpumask *cpu_map, int cpu)
> +static bool topology_span_sane(const struct cpumask *cpu_map)
> {
> - int i = cpu + 1;
> + struct sched_domain_topology_level *tl;
> + struct cpumask *covered, *id_seen;
> + int cpu;
>
> - /* NUMA levels are allowed to overlap */
> - if (tl->flags & SDTL_OVERLAP)
> - return true;
> + lockdep_assert_held(&sched_domains_mutex);
> + covered = sched_domains_tmpmask;
> + id_seen = sched_domains_tmpmask2;
> +
> + for_each_sd_topology(tl) {
> +
> + /* NUMA levels are allowed to overlap */
> + if (tl->flags & SDTL_OVERLAP)
> + continue;
> +
> + cpumask_clear(covered);
> + cpumask_clear(id_seen);
>
> - /*
> - * Non-NUMA levels cannot partially overlap - they must be either
> - * completely equal or completely disjoint. Otherwise we can end up
> - * breaking the sched_group lists - i.e. a later get_group() pass
> - * breaks the linking done for an earlier span.
> - */
> - for_each_cpu_from(i, cpu_map) {
> /*
> - * We should 'and' all those masks with 'cpu_map' to exactly
> - * match the topology we're about to build, but that can only
> - * remove CPUs, which only lessens our ability to detect
> - * overlaps
> + * Non-NUMA levels cannot partially overlap - they must be either
> + * completely equal or completely disjoint. Otherwise we can end up
> + * breaking the sched_group lists - i.e. a later get_group() pass
> + * breaks the linking done for an earlier span.
> */
> - if (!cpumask_equal(tl->mask(cpu), tl->mask(i)) &&
> - cpumask_intersects(tl->mask(cpu), tl->mask(i)))
> - return false;
> + for_each_cpu(cpu, cpu_map) {
> + const struct cpumask *tl_cpu_mask = tl->mask(cpu);
> + int id;
> +
> + /* lowest bit set in this mask is used as a unique id */
> + id = cpumask_first(tl_cpu_mask);
> +
Using the first CPU of the span as its id, like we do in get_group(), is
clever: if we've already seen that first CPU, the spans must match exactly;
otherwise it must be a completely new span.
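
(Concretely: assuming e.g. an MC span of {4-7}, tl->mask(4) through tl->mask(7)
should all be the same mask, so cpumask_first() yields 4 for every CPU in the
span and they all map to the same id, which is roughly the trick get_group()
uses to pick a representative CPU.)
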
> + /* if this mask doesn't collide with what we've already seen */
> + if (!cpumask_intersects(tl_cpu_mask, covered)) {
> + /* Really odd case when cpu != id, likely not sane */
> + if ((cpu != id) && !cpumask_equal(tl_cpu_mask, tl->mask(id)))
> + return false;
> + if (cpumask_test_and_set_cpu(id, id_seen))
> + return false;
> + cpumask_or(covered, tl_cpu_mask, covered);
> + } else if ((!cpumask_test_cpu(id, id_seen)) ||
> + !cpumask_equal(tl->mask(id), tl_cpu_mask)) {
> + /*
> + * a collision with covered should have exactly matched
> + * a previously seen mask with the same id
> + */
> + return false;
> + }
Per the previous comment, if you hinge the condition on @id itself,
couldn't you "simplify" it as below?
		if (cpumask_test_cpu(id, id_seen)) {
			/* First CPU has already been seen, ensure identical spans */
			if (!cpumask_equal(tl->mask(id), tl_cpu_mask))
				return false;
		} else {
			/* First CPU hasn't been seen before, ensure it's a completely new span */
			if (cpumask_intersects(tl_cpu_mask, covered))
				return false;
			cpumask_or(covered, covered, tl_cpu_mask);
			cpumask_set_cpu(id, id_seen);
		}
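
With that, an already-seen span only costs a cpumask_test_cpu() plus a
cpumask_equal(), a new span a cpumask_intersects() plus the bookkeeping, and
AFAICT the cpu != id oddity above doesn't need special-casing anymore, since
the first visit of a span always lands in the else branch and records both the
id and the covered bits. Completely untested, of course.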