Re: [PATCHv7 2/2] sched/deadline: Walk up cpuset hierarchy to decide root domain when hot-unplug
From: Juri Lelli
Date: Fri Nov 21 2025 - 08:05:39 EST
Hi!
On 19/11/25 17:55, Pingfan Liu wrote:
...
> +/* Access rule: must be called on local CPU with preemption disabled */
> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
...
> +/* The caller should hold cpuset_mutex */
Maybe we can add an explicit lockdep check?
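Something like the below (just a sketch; since cpuset_mutex is static to kernel/cgroup/cpuset.c, this would probably need a small assertion helper exported from cpuset, e.g. a hypothetical lockdep_assert_cpuset_held()):

	/* Inside cpuset.c, where cpuset_mutex is visible: */
	lockdep_assert_held(&cpuset_mutex);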
> void dl_add_task_root_domain(struct task_struct *p)
> {
> struct rq_flags rf;
> struct rq *rq;
> struct dl_bw *dl_b;
> + unsigned int cpu;
> + struct cpumask *msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
Can this corrupt local_cpu_mask_dl?
Without preemption being disabled, the following race can occur:
1. Thread calls dl_add_task_root_domain() on CPU 0
2. Gets pointer to CPU 0's local_cpu_mask_dl
3. Thread is preempted and migrated to CPU 1
4. Thread continues using CPU 0's local_cpu_mask_dl
5. Meanwhile, the scheduler on CPU 0 calls find_later_rq() which also
uses local_cpu_mask_dl (with preemption properly disabled)
6. Both contexts now corrupt the same per-CPU buffer concurrently
>
> raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
It's safe to get the pointer after this point, since pi_lock is taken
with interrupts (and thus preemption) disabled, so we can no longer
migrate off this CPU.
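IOW, something along these lines (sketch, untested), moving the per-CPU
pointer fetch to after pi_lock is acquired:

	void dl_add_task_root_domain(struct task_struct *p)
	{
		struct rq_flags rf;
		struct cpumask *msk;
		...
		raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
		/* IRQs are off here, we cannot be migrated anymore */
		msk = this_cpu_cpumask_var_ptr(local_cpu_mask_dl);
		...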
> if (!dl_task(p) || dl_entity_is_special(&p->dl)) {
> @@ -2919,16 +2952,25 @@ void dl_add_task_root_domain(struct task_struct *p)
> return;
> }
>
> - rq = __task_rq_lock(p, &rf);
> -
> + /*
> + * Get an active rq, whose rq->rd traces the correct root
> + * domain.
> + * Ideally this would be under cpuset reader lock until rq->rd is
> + * fetched. However, sleepable locks cannot nest inside pi_lock, so we
> + * rely on the caller of dl_add_task_root_domain() holding 'cpuset_mutex'
> + * to guarantee the CPU stays in the cpuset.
> + */
> + dl_get_task_effective_cpus(p, msk);
> + cpu = cpumask_first_and(cpu_active_mask, msk);
> + BUG_ON(cpu >= nr_cpu_ids);
> + rq = cpu_rq(cpu);
> dl_b = &rq->rd->dl_bw;
> - raw_spin_lock(&dl_b->lock);
> + /* End of fetching rd */
Not sure we need this comment above. :)
> + raw_spin_lock(&dl_b->lock);
> __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
> -
> raw_spin_unlock(&dl_b->lock);
> -
> - task_rq_unlock(rq, p, &rf);
> + raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
> }
Thanks,
Juri