Re: [PATCH] sched_ext: Fix missing rq lock in scx_bpf_cpuperf_set()
From: Andrea Righi
Date: Thu Mar 27 2025 - 03:56:22 EST
On Wed, Mar 26, 2025 at 02:24:16PM -1000, Tejun Heo wrote:
> Hello, Andrea.
>
> On Tue, Mar 25, 2025 at 03:00:21PM +0100, Andrea Righi wrote:
> > @@ -7114,12 +7114,22 @@ __bpf_kfunc void scx_bpf_cpuperf_set(s32 cpu, u32 perf)
> >
> > if (ops_cpu_valid(cpu, NULL)) {
> > struct rq *rq = cpu_rq(cpu);
> > + struct rq_flags rf;
> > + bool rq_unlocked;
> > +
> > + preempt_disable();
> > + rq_unlocked = (rq != this_rq()) || scx_kf_allowed_if_unlocked();
> > + if (rq_unlocked) {
> > + rq_lock_irqsave(rq, &rf);
>
> I don't think this is correct:
>
> - This is double-locking regardless of the locking order and thus can lead
> to ABBA deadlocks.
>
> - There's no guarantee that the locked rq is this_rq(). e.g. In wakeup path,
> the locked rq is on the CPU that the wakeup is targeting, not this_rq().
>
> Hmm... this is a bit tricky. SCX_CALL_OP*() always knows whether the rq is
> locked or not. We might as well pass it the currently locked rq and remember
> that in a percpu variable, so that scx_bpf_*() can always tell whether and
> which cpu is rq-locked currently. If unlocked, we can grab the rq lock. If
> target cpu is not the locked one, we can either fail the operation (and
> trigger ops error) or bounce it to an irq work.
Hm... that's right, it looks like this requires a bit more work than
expected. Saving the currently locked rq might also be helpful for other
kfuncs, so I'll take a look at this approach.
Thanks!
-Andrea