Re: [PATCH v3] sched_ext: Fix lock imbalance in dispatch_to_local_dsq()

From: Andrea Righi
Date: Sat Jan 25 2025 - 03:36:04 EST


On Sat, Jan 25, 2025 at 07:56:08AM +0100, Andrea Righi wrote:
...
> @@ -2557,6 +2567,7 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
> {
> struct rq *src_rq = task_rq(p);
> struct rq *dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
> + struct rq *locked_rq = rq;

I just noticed that this introduces an unused variable (locked_rq) with
!CONFIG_SMP, so please ignore this version. I'll send a new one soon.

Sorry for the noise.
-Andrea

>
> /*
> * We're synchronized against dequeue through DISPATCHING. As @p can't
> @@ -2593,12 +2604,16 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
> atomic_long_set_release(&p->scx.ops_state, SCX_OPSS_NONE);
>
> /* switch to @src_rq lock */
> - if (rq != src_rq) {
> - raw_spin_rq_unlock(rq);
> + if (locked_rq != src_rq) {
> + raw_spin_rq_unlock(locked_rq);
> + locked_rq = src_rq;
> raw_spin_rq_lock(src_rq);
> }
>
> - /* task_rq couldn't have changed if we're still the holding cpu */
> + /*
> + * If p->scx.holding_cpu still matches the current CPU, task_rq(p)
> + * has not changed and we can safely move the task to @dst_rq.
> + */
> if (likely(p->scx.holding_cpu == raw_smp_processor_id()) &&
> !WARN_ON_ONCE(src_rq != task_rq(p))) {
> /*