Re: [RFC][PATCH] sched/ext: Split curr|donor references properly

From: Andrea Righi

Date: Sun Dec 07 2025 - 03:54:37 EST


On Fri, Dec 05, 2025 at 09:47:24PM -0500, Joel Fernandes wrote:
> On Sat, Dec 06, 2025 at 12:14:45AM +0000, John Stultz wrote:
> > With proxy-exec, we want to do the accounting against the donor
> > most of the time. Without proxy-exec, there should be no
> > difference as the rq->donor and rq->curr are the same.
> >
> > So rework the logic to reference the rq->donor where appropriate.
> >
> > Also add donor info to scx_dump_state()
> >
> > Since CONFIG_SCHED_PROXY_EXEC currently depends on
> > !CONFIG_SCHED_CLASS_EXT, this should have no effect
> > (other than the extra donor output in scx_dump_state),
> > but this is one step needed to eventually remove that
> > constraint for proxy-exec.
> >
> > Just wanted to send this out for early review prior to LPC.
> >
> > Feedback or thoughts would be greatly appreciated!
>
> Hi John,
>
> I'm wondering if this will work well for BPF tasks because my understanding
> is that some scheduler BPF programs also monitor runtime statistics. If
> they are unaware of proxy execution, how will it work?

Right, some schedulers rely on p->scx.slice to evaluate task runtime.
It'd be nice for the BPF schedulers to be aware of the donor.

>
> I don't see any code in the patch that passes the donor information to the
> BPF ops, for instance. I would really like the SCX folks to chime in before
> we can move this patch forward. Thanks for marking it as an RFC.
>
> We need to settle how donor information for the currently executing task
> will reach a scheduler BPF program. If we can make this happen
> transparently, that's ideal. Otherwise, we may have to pass both the donor
> task and the currently executing task to the BPF ops.

That's what I was thinking: callbacks like ops.running(), ops.tick() and
ops.stopping() should probably take a struct task_struct *donor argument in
addition to struct task_struct *p. The BPF scheduler can then decide how to
use the donor information (this would also address the runtime evaluation).

Thanks,
-Andrea

>
> Thanks,
>
> - Joel
>
>
> >
> > Signed-off-by: John Stultz <jstultz@xxxxxxxxxx>
> > ---
> > Cc: Joel Fernandes <joelaf@xxxxxxxxxx>
> > Cc: Qais Yousef <qyousef@xxxxxxxxxxx>
> > Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> > Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > Cc: Juri Lelli <juri.lelli@xxxxxxxxxx>
> > Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> > Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> > Cc: Valentin Schneider <vschneid@xxxxxxxxxx>
> > Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> > Cc: Ben Segall <bsegall@xxxxxxxxxx>
> > Cc: Zimuzo Ezeozue <zezeozue@xxxxxxxxxx>
> > Cc: Mel Gorman <mgorman@xxxxxxx>
> > Cc: Will Deacon <will@xxxxxxxxxx>
> > Cc: Waiman Long <longman@xxxxxxxxxx>
> > Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
> > Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> > Cc: Metin Kaya <Metin.Kaya@xxxxxxx>
> > Cc: Xuewen Yan <xuewen.yan94@xxxxxxxxx>
> > Cc: K Prateek Nayak <kprateek.nayak@xxxxxxx>
> > Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> > Cc: Daniel Lezcano <daniel.lezcano@xxxxxxxxxx>
> > Cc: Tejun Heo <tj@xxxxxxxxxx>
> > Cc: David Vernet <void@xxxxxxxxxxxxx>
> > Cc: Andrea Righi <arighi@xxxxxxxxxx>
> > Cc: Changwoo Min <changwoo@xxxxxxxxxx>
> > Cc: sched-ext@xxxxxxxxxxxxxxx
> > Cc: kernel-team@xxxxxxxxxxx
> > ---
> > kernel/sched/ext.c | 31 +++++++++++++++++--------------
> > 1 file changed, 17 insertions(+), 14 deletions(-)
> >
> > diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
> > index 05f5a49e9649a..446091cba4429 100644
> > --- a/kernel/sched/ext.c
> > +++ b/kernel/sched/ext.c
> > @@ -938,17 +938,17 @@ static void touch_core_sched_dispatch(struct rq *rq, struct task_struct *p)
> >
> > static void update_curr_scx(struct rq *rq)
> > {
> > - struct task_struct *curr = rq->curr;
> > + struct task_struct *donor = rq->donor;
> > s64 delta_exec;
> >
> > delta_exec = update_curr_common(rq);
> > if (unlikely(delta_exec <= 0))
> > return;
> >
> > - if (curr->scx.slice != SCX_SLICE_INF) {
> > - curr->scx.slice -= min_t(u64, curr->scx.slice, delta_exec);
> > - if (!curr->scx.slice)
> > - touch_core_sched(rq, curr);
> > + if (donor->scx.slice != SCX_SLICE_INF) {
> > + donor->scx.slice -= min_t(u64, donor->scx.slice, delta_exec);
> > + if (!donor->scx.slice)
> > + touch_core_sched(rq, donor);
> > }
> > }
> >
> > @@ -1090,14 +1090,14 @@ static void dispatch_enqueue(struct scx_sched *sch, struct scx_dispatch_q *dsq,
> > struct rq *rq = container_of(dsq, struct rq, scx.local_dsq);
> > bool preempt = false;
> >
> > - if ((enq_flags & SCX_ENQ_PREEMPT) && p != rq->curr &&
> > - rq->curr->sched_class == &ext_sched_class) {
> > - rq->curr->scx.slice = 0;
> > + if ((enq_flags & SCX_ENQ_PREEMPT) && p != rq->donor &&
> > + rq->donor->sched_class == &ext_sched_class) {
> > + rq->donor->scx.slice = 0;
> > preempt = true;
> > }
> >
> > if (preempt || sched_class_above(&ext_sched_class,
> > - rq->curr->sched_class))
> > + rq->donor->sched_class))
> > resched_curr(rq);
> > } else {
> > raw_spin_unlock(&dsq->lock);
> > @@ -2001,7 +2001,7 @@ static void dispatch_to_local_dsq(struct scx_sched *sch, struct rq *rq,
> > }
> >
> > /* if the destination CPU is idle, wake it up */
> > - if (sched_class_above(p->sched_class, dst_rq->curr->sched_class))
> > + if (sched_class_above(p->sched_class, dst_rq->donor->sched_class))
> > resched_curr(dst_rq);
> > }
> >
> > @@ -2424,7 +2424,7 @@ static struct task_struct *first_local_task(struct rq *rq)
> > static struct task_struct *
> > do_pick_task_scx(struct rq *rq, struct rq_flags *rf, bool force_scx)
> > {
> > - struct task_struct *prev = rq->curr;
> > + struct task_struct *prev = rq->donor;
> > bool keep_prev, kick_idle = false;
> > struct task_struct *p;
> >
> > @@ -3093,7 +3093,7 @@ int scx_check_setscheduler(struct task_struct *p, int policy)
> > #ifdef CONFIG_NO_HZ_FULL
> > bool scx_can_stop_tick(struct rq *rq)
> > {
> > - struct task_struct *p = rq->curr;
> > + struct task_struct *p = rq->donor;
> >
> > if (scx_rq_bypassing(rq))
> > return false;
> > @@ -4587,6 +4587,9 @@ static void scx_dump_state(struct scx_exit_info *ei, size_t dump_len)
> > dump_line(&ns, " curr=%s[%d] class=%ps",
> > rq->curr->comm, rq->curr->pid,
> > rq->curr->sched_class);
> > + dump_line(&ns, " donor=%s[%d] class=%ps",
> > + rq->donor->comm, rq->donor->pid,
> > + rq->donor->sched_class);
> > if (!cpumask_empty(rq->scx.cpus_to_kick))
> > dump_line(&ns, " cpus_to_kick : %*pb",
> > cpumask_pr_args(rq->scx.cpus_to_kick));
> > @@ -5426,7 +5429,7 @@ static bool kick_one_cpu(s32 cpu, struct rq *this_rq, unsigned long *ksyncs)
> > unsigned long flags;
> >
> > raw_spin_rq_lock_irqsave(rq, flags);
> > - cur_class = rq->curr->sched_class;
> > + cur_class = rq->donor->sched_class;
> >
> > /*
> > * During CPU hotplug, a CPU may depend on kicking itself to make
> > @@ -5438,7 +5441,7 @@ static bool kick_one_cpu(s32 cpu, struct rq *this_rq, unsigned long *ksyncs)
> > !sched_class_above(cur_class, &ext_sched_class)) {
> > if (cpumask_test_cpu(cpu, this_scx->cpus_to_preempt)) {
> > if (cur_class == &ext_sched_class)
> > - rq->curr->scx.slice = 0;
> > + rq->donor->scx.slice = 0;
> > cpumask_clear_cpu(cpu, this_scx->cpus_to_preempt);
> > }
> >
> > --
> > 2.52.0.223.gf5cc29aaa4-goog
> >