Re: [PATCH v2 05/23] sched/cache: Assign preferred LLC ID to processes
From: Peter Zijlstra
Date: Tue Dec 09 2025 - 07:11:55 EST
On Wed, Dec 03, 2025 at 03:07:24PM -0800, Tim Chen wrote:
> With cache-aware scheduling enabled, each task is assigned a
> preferred LLC ID. This allows quick identification of the LLC domain
> where the task prefers to run, similar to numa_preferred_nid in
> NUMA balancing.
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0a3918269906..10cec83f65d5 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1300,6 +1300,7 @@ void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
> 	struct mm_struct *mm = p->mm;
> 	struct mm_sched *pcpu_sched;
> 	unsigned long epoch;
> +	int mm_sched_llc = -1;
>
> 	if (!sched_cache_enabled())
> 		return;
> @@ -1330,6 +1331,23 @@ void account_mm_sched(struct rq *rq, struct task_struct *p, s64 delta_exec)
> 		if (mm->mm_sched_cpu != -1)
> 			mm->mm_sched_cpu = -1;
> 	}
> +
> +	if (mm->mm_sched_cpu != -1) {
> +		mm_sched_llc = llc_id(mm->mm_sched_cpu);
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +		/*
> +		 * Don't assign preferred LLC if it
> +		 * conflicts with NUMA balancing.
> +		 */
> +		if (p->numa_preferred_nid >= 0 &&
> +		    cpu_to_node(mm->mm_sched_cpu) != p->numa_preferred_nid)
> +			mm_sched_llc = -1;
> +#endif
> +	}
> +
> +	if (p->preferred_llc != mm_sched_llc)
> +		p->preferred_llc = mm_sched_llc;
> }
This can of course still happen when sched_setnuma() gets called. I'm
thinking it is not much of an issue because we expect this thing to get
called fairly regularly -- at a higher rate than sched_setnuma() at
least -- and thus the conflict only exists for a short period of time?
If so, that would make for a good comment.
Additionally, we could of course search for the busiest LLC inside the
node, instead of setting -1. Again, that could live as a comment for
future work.