Re: [PATCH v3 04/21] sched/cache: Make LLC id continuous
From: Tim Chen
Date: Thu Feb 19 2026 - 16:04:42 EST
On Thu, 2026-02-19 at 11:20 -0800, Tim Chen wrote:
> On Thu, 2026-02-19 at 23:20 +0800, Chen, Yu C wrote:
> > On 2/19/2026 10:59 PM, Peter Zijlstra wrote:
> > > On Tue, Feb 10, 2026 at 02:18:44PM -0800, Tim Chen wrote:
> > >
> > > > diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> > > > index cf643a5ddedd..ca46b5cf7f78 100644
> > > > --- a/kernel/sched/topology.c
> > > > +++ b/kernel/sched/topology.c
> > > > @@ -20,6 +20,7 @@ void sched_domains_mutex_unlock(void)
> > > > /* Protected by sched_domains_mutex: */
> > > > static cpumask_var_t sched_domains_tmpmask;
> > > > static cpumask_var_t sched_domains_tmpmask2;
> > > > +static int tl_max_llcs;
> > > >
> > > > static int __init sched_debug_setup(char *str)
> > > > {
> > > > @@ -658,7 +659,7 @@ static void destroy_sched_domains(struct sched_domain *sd)
> > > > */
> > > > DEFINE_PER_CPU(struct sched_domain __rcu *, sd_llc);
> > > > DEFINE_PER_CPU(int, sd_llc_size);
> > > > -DEFINE_PER_CPU(int, sd_llc_id);
> > > > +DEFINE_PER_CPU(int, sd_llc_id) = -1;
> > > > DEFINE_PER_CPU(int, sd_share_id);
> > > > DEFINE_PER_CPU(struct sched_domain_shared __rcu *, sd_llc_shared);
> > > > DEFINE_PER_CPU(struct sched_domain __rcu *, sd_numa);
> > > > @@ -684,7 +685,6 @@ static void update_top_cache_domain(int cpu)
> > > >
> > > > rcu_assign_pointer(per_cpu(sd_llc, cpu), sd);
> > > > per_cpu(sd_llc_size, cpu) = size;
> > > > - per_cpu(sd_llc_id, cpu) = id;
> > > > rcu_assign_pointer(per_cpu(sd_llc_shared, cpu), sds);
> > > >
> > > > sd = lowest_flag_domain(cpu, SD_CLUSTER);
> > > > @@ -2567,10 +2567,18 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> > > >
> > > > /* Set up domains for CPUs specified by the cpu_map: */
> > > > for_each_cpu(i, cpu_map) {
> > > > - struct sched_domain_topology_level *tl;
> > > > + struct sched_domain_topology_level *tl, *tl_llc = NULL;
> > > > + int lid;
> > > >
> > > > sd = NULL;
> > > > for_each_sd_topology(tl) {
> > > > + int flags = 0;
> > > > +
> > > > + if (tl->sd_flags)
> > > > + flags = (*tl->sd_flags)();
> > > > +
> > > > + if (flags & SD_SHARE_LLC)
> > > > + tl_llc = tl;
> > > >
> > > > sd = build_sched_domain(tl, cpu_map, attr, sd, i);
> > > >
> > > > @@ -2581,6 +2589,39 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
> > > > if (cpumask_equal(cpu_map, sched_domain_span(sd)))
> > > > break;
> > > > }
> > > > +
> > > > + lid = per_cpu(sd_llc_id, i);
> > > > + if (lid == -1) {
> > > > + int j;
> > > > +
> > > > + /*
> > > > + * Assign the llc_id to the CPUs that do not
> > > > + * have an LLC.
> > > > + */
> > >
> > > Where does this happen? Is this for things like Atom that don't have an
> > > L3 and so we don't set up a LLC domain?
> > >
> >
> > Yes, on some hybrid platforms some CPUs might not have an L3;
> > Tim might correct me if I'm wrong. The above code is derived from
> > update_top_cache_domain(): if there is no sched domain with
> > SD_SHARE_LLC, per_cpu(sd_llc_id, cpu) is set to the CPU number
> > directly.
> >
>
> That's correct. One example is Meteor Lake, where some Atom CPUs don't have
> an L3 and have only L2. Some Ampere CPUs also have no shared L3.
>
> https://www.spinics.net/lists/kernel/msg5863118.html
>
> This also reminds me that if we rely on cpu_coregroup_mask for LLC id
> assignment, we may miss such platforms, which need to treat L2 as the
> last level cache. So we may need to fall back to cpu_clustergroup_mask
> or cpu_smt_mask where applicable.
On further inspection of the code, cpu_coregroup_mask will just be the same
as cpu_clustergroup_mask in that case, so we should be okay.
Tim
>
> Tim
>
> > thanks,
> > Chenyu
> >