Re: [PATCH 2/3] sched_ext: Introduce per-NUMA idle cpumasks

From: Tejun Heo
Date: Tue Dec 03 2024 - 19:04:21 EST


Hello,

On Tue, Dec 03, 2024 at 04:36:11PM +0100, Andrea Righi wrote:
...
> Probably a better way to solve this issue is to introduce new kfunc's to
> explicitly select specific per-NUMA cpumask and modify the scx
> schedulers to transition to this new API, for example:
>
> const struct cpumask *scx_bpf_get_idle_numa_cpumask(int node)
> const struct cpumask *scx_bpf_get_idle_numa_smtmask(int node)

Yeah, I don't think we want to break backward compat here. Can we introduce
a flag to switch between the node-aware and the flattened logic, and trigger
an ops error if the wrong flavor is used? Then, we can deprecate and drop the
old behavior after a few releases. Also, I think it can be named
scx_bpf_get_idle_cpumask_node().

> +static struct cpumask *get_idle_cpumask(int cpu)
> +{
> +	int node = cpu_to_node(cpu);
> +
> +	return idle_masks[node]->cpu;
> +}
> +
> +static struct cpumask *get_idle_smtmask(int cpu)
> +{
> +	int node = cpu_to_node(cpu);
> +
> +	return idle_masks[node]->smt;
> +}

Hmm... why are they keyed by cpu? Wouldn't it make more sense to key them by
node?

> +static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> +{
> +	int start = cpu_to_node(smp_processor_id());
> +	int node, cpu;
> +
> +	for_each_node_state_wrap(node, N_ONLINE, start) {
> +		/*
> +		 * scx_pick_idle_cpu_from_node() can be expensive and redundant
> +		 * if none of the CPUs in the NUMA node can be used (according
> +		 * to cpus_allowed).
> +		 *
> +		 * Therefore, check if the NUMA node is usable in advance to
> +		 * save some CPU cycles.
> +		 */
> +		if (!cpumask_intersects(cpumask_of_node(node), cpus_allowed))
> +			continue;
> +		cpu = scx_pick_idle_cpu_from_node(node, cpus_allowed, flags);
> +		if (cpu >= 0)
> +			return cpu;

This is fine for now, but it'd be ideal if the iteration were in inter-node
distance order, so that each CPU's search radiates from its local node out to
the furthest ones rather than walking nodes in numeric order.

Thanks.

--
tejun