Re: [PATCH v2] sched/ext: Add cpumask to skip unsuitable dispatch queues

From: Tejun Heo

Date: Wed Feb 04 2026 - 15:35:18 EST


On Wed, Feb 04, 2026 at 04:34:18AM -0500, Qiliang Yuan wrote:
> Add a cpus_allowed cpumask to struct scx_dispatch_q to track the union
> of affinity masks for all tasks enqueued in a user-defined DSQ. This
> allows a CPU to quickly skip DSQs that contain no tasks runnable on the
> current CPU, avoiding wasteful O(N) scans.
>
> - Allocate/free cpus_allowed only for user-defined DSQs.
> - Use free_dsq_rcu_callback to safely free the DSQ and its nested mask.
> - Update the mask in dispatch_enqueue() using cpumask_copy() for the
> first task and cpumask_or() for subsequent ones. Skip updates if the
> mask is already full.
> - Update the DSQ mask in set_cpus_allowed_scx() when a task's affinity
> changes while enqueued.
> - Handle allocation failures in scx_create_dsq() to prevent memory leaks.
>
> This optimization improves performance with many DSQs and tight affinity
> constraints. The bitwise overhead is significantly lower than potential
> cache misses during task iteration.
>
> Signed-off-by: Qiliang Yuan <yuanql9@xxxxxxxxxxxxxxx>
> Signed-off-by: Qiliang Yuan <realwujing@xxxxxxxxx>

As Emil pointed out earlier, this adds overhead to the general path that scales
with the number of CPUs, while the benefit isn't that generic. Similar
optimizations can be done on the BPF side, and throwing a lot of tasks with
varying affinity restrictions into a single queue that is frequently scanned by
multiple CPUs is not scalable to begin with.

Thanks.

--
tejun