Re: [RFC][PATCH v14 5/7] sched: Add an initial sketch of the find_proxy_task() function
From: Peter Zijlstra
Date: Tue Dec 17 2024 - 03:53:03 EST
On Mon, Dec 16, 2024 at 09:42:31PM -0800, John Stultz wrote:
> On Fri, Dec 13, 2024 at 4:06 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > On Mon, Nov 25, 2024 at 11:51:59AM -0800, John Stultz wrote:
> >
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index f8714050b6d0d..b492506d33415 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -5052,6 +5052,34 @@ static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
> > > }
> > > }
> > >
> > > +/*
> > > + * Only called from __schedule context
> > > + *
> > > + * There are some cases where we are going to re-do the action
> > > + * that added the balance callbacks. We may not be in a state
> > > + * where we can run them, so just zap them so they can be
> > > + * properly re-added on the next time around. This is handled
> > > + * similarly to running the callbacks, except we just don't
> > > + * call them.
> > > + */
> >
> > Which specific callbacks are these? sched_core_balance()?
> >
> > In general, shooting down all callbacks like this makes me feel somewhat
> > uncomfortable.
>
> So, if we originally picked an RT task, I believe it would set up the
> push_rt_tasks callback, but if it got migrated and we needed to
> pick again, we'd end up tripping on
> `SCHED_WARN_ON(rq->balance_callback && rq->balance_callback !=
> &balance_push_callback);`
>
> For a while I tried to unpin and run the balance callbacks before
> calling pick_again when find_proxy_task() failed, but that ran into
> trouble with tasks getting unintentionally added to the RT pushable
> list (this was back in ~February, so my memory is a little fuzzy).
>
> So that's when I figured zapping the callbacks would be best, the
> idea being that since we are starting selection over, we effectively
> have to undo any state that was set by pick_next_task() before
> calling it again.
>
> Let me know if you have concerns with this, or suggestions for other approaches.
For now, let's stick a coherent comment on it, explaining exactly which
callbacks get zapped and why.
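
Roughly along these lines, perhaps (untested sketch; I'm assuming the
helper keeps the zap_balance_callbacks() name from the patch, and that
balance_push_callback must be preserved since the hotplug path keys off
it):

/*
 * Only called from __schedule() context.
 *
 * When find_proxy_task() fails and we have to pick again, the
 * callbacks queued by the previous pick (e.g. push_rt_tasks /
 * push_dl_tasks queued via queue_balance_callback()) describe a
 * decision we are about to redo. We cannot run them here with the
 * rq lock held and rf pinned, so just unhook them; the next
 * pick_next_task() will queue them again if still needed.
 */
static void zap_balance_callbacks(struct rq *rq)
{
	struct balance_callback *next, *head = rq->balance_callback;
	bool found = false;

	lockdep_assert_rq_held(rq);

	while (head) {
		/* keep the hotplug balance_push marker around */
		if (head == &balance_push_callback)
			found = true;
		next = head->next;
		head->next = NULL;
		head = next;
	}
	rq->balance_callback = found ? &balance_push_callback : NULL;
}

That way the next pick simply re-queues whatever callback is still
relevant, instead of us running stale ones against a decision we are
about to throw away.
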
> > > +/*
> > > + * Initial simple proxy that just returns the task if it's waking
> > > + * or deactivates the blocked task so we can pick something that
> > > + * isn't blocked.
> > > + */
> > > +static struct task_struct *
> > > +find_proxy_task(struct rq *rq, struct task_struct *donor, struct rq_flags *rf)
> > > +{
> > > + struct task_struct *p = donor;
> > > + struct mutex *mutex;
> > > +
> > > + mutex = p->blocked_on;
> > > + /* Something changed in the chain, so pick again */
> > > + if (!mutex)
> > > + return NULL;
> > > + /*
> > > + * By taking mutex->wait_lock we hold off concurrent mutex_unlock()
> > > + * and ensure @owner sticks around.
> > > + */
> > > + raw_spin_lock(&mutex->wait_lock);
> > > + raw_spin_lock(&p->blocked_lock);
> >
> > I'm still wondering what this blocked_lock does; that previous patch had
> > it mirror mutex->wait_lock too, and so far I don't see the point.
>
> Yeah, early on in the series it's maybe not as useful, but as we start
> dealing with sleeping owner enqueuing, it's doing more:
> https://github.com/johnstultz-work/linux-dev/commit/d594ca8df88645aa3b2b9daa105664893818bdb7
>
> But it is possible it is more of a crutch for me to keep the locking
> rules straight, since it's simpler to keep in my head. :)
> Happy to think a bit more on whether it can be folded together with
> another lock.
I'm a big believer in only introducing state when we actually need it --
and I don't believe we actually need blocked_lock until we go SMP.
Anyway, I have since figured out the why of blocked_lock again; but
yeah, comments, because I'm sure to forget it again at some point.
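
Something near the blocked_lock declaration along these lines would do,
I think (just a sketch of my reading of the series; the nesting is
inferred from the hunk above, where wait_lock is taken before
blocked_lock):

/*
 * p->blocked_lock protects p->blocked_on: it serializes the task
 * marking itself blocked on a mutex in __mutex_lock() against
 * find_proxy_task() / mutex_unlock() inspecting and clearing that
 * state from another context.
 *
 * Lock ordering:
 *   mutex->wait_lock
 *     p->blocked_lock
 *
 * i.e. blocked_lock nests inside the mutex's wait_lock, which is
 * why find_proxy_task() takes wait_lock first.
 */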