Re: [PATCH v3 13/21] sched/cache: Handle moving single tasks to/from their preferred LLC

From: Tim Chen

Date: Tue Feb 17 2026 - 17:04:33 EST


On Wed, 2026-02-18 at 00:30 +0530, Madadi Vineeth Reddy wrote:
> On 11/02/26 03:48, Tim Chen wrote:
> > In generic load balancing (i.e., non-cache-aware load balancing),
> > if the busiest runqueue has only one task, active balancing may be
> > invoked to move it. However, this migration might break LLC locality.
> >
> > Before migration, check whether the task is running on its preferred
> > LLC: do not move a lone task to another LLC if doing so would move
> > the task away from its preferred LLC or cause excessive imbalance
> > between LLCs.
> >
> > On the other hand, if the migration type is migrate_llc_task, there
> > are tasks on env->src_cpu that want to be migrated to their preferred
> > LLC, so launch the active load balance anyway.
>
> Nit:
> But the migrate_llc_task check is made after the alb_break_llc check,
> which seems contradictory. I understand that this check
>
> env->src_rq->nr_pref_llc_running == env->src_rq->cfs.h_nr_runnable
>
> prevents alb_break_llc from returning true when migrate_llc_task
> exists. However, checking migrate_llc_task first would make the
> priority and intent more explicit.

We have actually considered that.

Suppose we did the migrate_llc_task check first: we would still have
to check that this migration does not push the load imbalance beyond
what we allow with can_migrate_llc(), i.e. do the same check as in
alb_break_llc().

Then for other kinds of task migration, we also need to check
that those migrations don't break LLC policy with alb_break_llc().

So it is better to just do the alb_break_llc() check first to
cover all migration types.

Moving the migrate_llc_task check up doesn't really save any code.
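To make this concrete, here is a standalone sketch of the two possible
orderings of the checks in need_active_balance(). The types and helper
names here are stand-ins for illustration, not the actual kernel
definitions; exhaustively comparing both decision functions shows they
decide identically, so doing the alb_break_llc() veto first simply
covers every migration type with a single call:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel types involved. */
enum migration_type { migrate_load, migrate_misfit, migrate_llc_task };

struct lb_env_model {
	enum migration_type migration_type;
	bool breaks_llc;     /* what alb_break_llc() would report */
	bool asym_or_misfit; /* the other reasons to active-balance */
};

/* Chosen ordering: veto with the LLC-locality check up front, so
 * every migration type is covered by one call. */
static int need_active_balance_llc_first(const struct lb_env_model *env)
{
	if (env->breaks_llc)		/* alb_break_llc() */
		return 0;
	if (env->asym_or_misfit)
		return 1;
	if (env->migration_type == migrate_llc_task)
		return 1;
	return 0;
}

/* Alternative ordering: migrate_llc_task first.  That branch still
 * needs the same imbalance check before returning 1, and the other
 * branches still need the LLC veto, so nothing is saved. */
static int need_active_balance_llc_task_first(const struct lb_env_model *env)
{
	if (env->migration_type == migrate_llc_task)
		return !env->breaks_llc; /* same can_migrate_llc() check */
	if (env->breaks_llc)		 /* still needed for the rest */
		return 0;
	if (env->asym_or_misfit)
		return 1;
	return 0;
}
```

Either way the same two checks are performed; the first ordering just
avoids duplicating the imbalance check in the migrate_llc_task branch.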

Tim

>
> Thanks,
> Vineeth
>
> >
> > Co-developed-by: Chen Yu <yu.c.chen@xxxxxxxxx>
> > Signed-off-by: Chen Yu <yu.c.chen@xxxxxxxxx>
> > Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > ---
> >
> > Notes:
> > v2->v3:
> > Remove redundant rcu read lock in break_llc_locality().
> >
> > kernel/sched/fair.c | 54 ++++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 53 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1697791ef11c..03959a701514 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -9999,12 +9999,60 @@ static __maybe_unused enum llc_mig can_migrate_llc_task(int src_cpu, int dst_cpu
> >  			    task_util(p), to_pref);
> >  }
> >
> > +/*
> > + * Check if active load balance breaks LLC locality in
> > + * terms of cache aware load balance.
> > + */
> > +static inline bool
> > +alb_break_llc(struct lb_env *env)
> > +{
> > +	if (!sched_cache_enabled())
> > +		return false;
> > +
> > +	if (cpus_share_cache(env->src_cpu, env->dst_cpu))
> > +		return false;
> > +	/*
> > +	 * All tasks prefer to stay on their current CPU.
> > +	 * Do not pull a task from its preferred CPU if:
> > +	 * 1. It is the only task running there; OR
> > +	 * 2. Migrating it away from its preferred LLC would violate
> > +	 *    the cache-aware scheduling policy.
> > +	 */
> > +	if (env->src_rq->nr_pref_llc_running &&
> > +	    env->src_rq->nr_pref_llc_running == env->src_rq->cfs.h_nr_runnable) {
> > +		unsigned long util = 0;
> > +		struct task_struct *cur;
> > +
> > +		if (env->src_rq->nr_running <= 1)
> > +			return true;
> > +
> > +		/*
> > +		 * Reach here in load balance with
> > +		 * rcu_read_lock() protected.
> > +		 */
> > +		cur = rcu_dereference(env->src_rq->curr);
> > +		if (cur)
> > +			util = task_util(cur);
> > +
> > +		if (can_migrate_llc(env->src_cpu, env->dst_cpu,
> > +				    util, false) == mig_forbid)
> > +			return true;
> > +	}
> > +
> > +	return false;
> > +}
> >  #else
> >  static inline bool get_llc_stats(int cpu, unsigned long *util,
> >  				 unsigned long *cap)
> >  {
> >  	return false;
> >  }
> > +
> > +static inline bool
> > +alb_break_llc(struct lb_env *env)
> > +{
> > +	return false;
> > +}
> >  #endif
> >  /*
> >   * can_migrate_task - may task p from runqueue rq be migrated to this_cpu?
> > @@ -12421,6 +12469,9 @@ static int need_active_balance(struct lb_env *env)
> >  {
> >  	struct sched_domain *sd = env->sd;
> >
> > +	if (alb_break_llc(env))
> > +		return 0;
> > +
> >  	if (asym_active_balance(env))
> >  		return 1;
> >
> > @@ -12440,7 +12491,8 @@ static int need_active_balance(struct lb_env *env)
> >  		return 1;
> >  	}
> >
> > -	if (env->migration_type == migrate_misfit)
> > +	if (env->migration_type == migrate_misfit ||
> > +	    env->migration_type == migrate_llc_task)
> >  		return 1;
> >
> >  	return 0;
>