Re: [PATCH 08/19] sched/fair: Introduce per runqueue task LLC preference counter
From: Tim Chen
Date: Wed Oct 15 2025 - 16:41:44 EST
On Wed, 2025-10-15 at 14:21 +0200, Peter Zijlstra wrote:
> On Sat, Oct 11, 2025 at 11:24:45AM -0700, Tim Chen wrote:
> > Each runqueue is assigned a static array where each element tracks
> > the number of tasks preferring a given LLC, indexed from 0 to
> > NR_LLCS - 1.
> >
> > For example, rq->nr_pref_llc[3] = 2 signifies that there are 2 tasks on
> > this runqueue which prefer to run within LLC 3.
> >
> > The load balancer can use this information to identify busy runqueues
> > and migrate tasks to their preferred LLC domains.
> >
> > Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > ---
> > kernel/sched/fair.c | 35 +++++++++++++++++++++++++++++++++++
> > kernel/sched/sched.h | 1 +
> > 2 files changed, 36 insertions(+)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index fd315937c0cf..b7a68fe7601b 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -1235,22 +1235,51 @@ static inline int llc_idx(int cpu)
> > return per_cpu(sd_llc_idx, cpu);
> > }
> >
> > +static inline int pref_llc_idx(struct task_struct *p)
> > +{
> > + return llc_idx(p->preferred_llc);
> > +}
> > +
> > static void account_llc_enqueue(struct rq *rq, struct task_struct *p)
> > {
> > + int pref_llc;
> > +
> > if (!sched_cache_enabled())
> > return;
> >
> > rq->nr_llc_running += (p->preferred_llc != -1);
> > rq->nr_pref_llc_running += (p->preferred_llc == task_llc(p));
> > +
> > + if (p->preferred_llc < 0)
> > + return;
> > +
> > + pref_llc = pref_llc_idx(p);
> > + if (pref_llc < 0)
> > + return;
> > +
> > + ++rq->nr_pref_llc[pref_llc];
> > }
> >
> > static void account_llc_dequeue(struct rq *rq, struct task_struct *p)
> > {
> > + int pref_llc;
> > +
> > if (!sched_cache_enabled())
> > return;
> >
> > rq->nr_llc_running -= (p->preferred_llc != -1);
> > rq->nr_pref_llc_running -= (p->preferred_llc == task_llc(p));
> > +
> > + if (p->preferred_llc < 0)
> > + return;
> > +
> > + pref_llc = pref_llc_idx(p);
> > + if (pref_llc < 0)
> > + return;
> > +
> > + /* avoid negative counter */
> > + if (rq->nr_pref_llc[pref_llc] > 0)
> > + --rq->nr_pref_llc[pref_llc];
>
> How!? Also, please use post increment/decrement operators.
Will change the rq->nr_pref_llc[pref_llc] <= 0 check to a warning instead,
and switch the decrement to the post-decrement operator.
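
Something like this, perhaps (untested sketch; warns on underflow but
still avoids wrapping the unsigned counter):

	/*
	 * A task with a valid preferred LLC must have been accounted
	 * at enqueue time, so the counter cannot legitimately be 0 here.
	 */
	if (!WARN_ON_ONCE(!rq->nr_pref_llc[pref_llc]))
		rq->nr_pref_llc[pref_llc]--;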
>
> > }
> >
> > void mm_init_sched(struct mm_struct *mm, struct mm_sched __percpu *_pcpu_sched)
> > @@ -1524,10 +1553,16 @@ void init_sched_mm(struct task_struct *p)
> >
> > void reset_llc_stats(struct rq *rq)
> > {
> > + int i = 0;
> > +
> > if (!sched_cache_enabled())
> > return;
> >
> > rq->nr_llc_running = 0;
> > +
> > + for (i = 0; i < max_llcs; ++i)
> > + rq->nr_pref_llc[i] = 0;
> > +
> > rq->nr_pref_llc_running = 0;
> > }
>
> Still don't understand why this thing exists..
Will either remove this or turn it into a debug warning for the
case where the rq has no fair tasks.
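
If we keep it as a debug check only, it could look roughly like this
(assuming reset_llc_stats() is only called when the rq has no fair
tasks queued):

	void reset_llc_stats(struct rq *rq)
	{
		int i;

		if (!sched_cache_enabled())
			return;

		/* With no fair tasks left, every counter should be 0. */
		WARN_ON_ONCE(rq->nr_llc_running || rq->nr_pref_llc_running);

		for (i = 0; i < max_llcs; i++)
			WARN_ON_ONCE(rq->nr_pref_llc[i]);
	}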
>
> >
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 3ab64067acc6..b801d32d5fba 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -1101,6 +1101,7 @@ struct rq {
> > #ifdef CONFIG_SCHED_CACHE
> > unsigned int nr_pref_llc_running;
> > unsigned int nr_llc_running;
> > + unsigned int nr_pref_llc[NR_LLCS];
>
> Gah, yeah, lets not do this. Just (re)alloc the thing on topology
> changes or something.
Will have to think about how to keep the tasks' preferences
consistent with nr_pref_llc when the array is reallocated. Perhaps
size it at NR_CPUS so we allocate it once, never have to resize or
reallocate it, and only have to refill it with the right data.
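
A rough sketch of that direction (names and call site hypothetical;
NR_CPUS bounds the number of LLCs since every LLC contains at least
one CPU):

	/* kernel/sched/sched.h */
	unsigned int *nr_pref_llc;	/* one entry per possible LLC */

	/* one-time allocation, e.g. during rq/sched domain setup */
	rq->nr_pref_llc = kcalloc(NR_CPUS, sizeof(*rq->nr_pref_llc),
				  GFP_KERNEL);

That way the array never needs to grow on topology changes; only its
contents have to be rebuilt from the tasks' preferences.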
Tim