Re: [PATCH v2 3/4] sched/rt: Split root_domain->rto_count to per-NUMA-node counters
From: Tim Chen
Date: Tue Mar 24 2026 - 18:45:32 EST
On Tue, 2026-03-24 at 13:16 +0100, Peter Zijlstra wrote:
> On Mon, Mar 23, 2026 at 11:09:24AM -0700, Tim Chen wrote:
> > On Fri, 2026-03-20 at 11:24 +0100, Peter Zijlstra wrote:
> > > On Mon, Jul 21, 2025 at 02:10:25PM +0800, Pan Deng wrote:
> > > > As a complement, this patch splits
> > > > `rto_count` into per-NUMA-node counters to reduce the contention.
> > >
> > > Right... so Tim, didn't we have similar patches for task_group::load_avg
> > > or something like that? Whatever did happen there? Can we share common
> > > infra?
> >
> > We did talk about introducing per NUMA counter for load_avg. We went with
> > limiting the update rate of load_avg to not more than once per msec
> > in commit 1528c661c24b4 to control the cache bounce.
> >
> > >
> > > Also since Tim is sitting on this LLC infrastructure, can you compare
> > > per-node and per-llc for this stuff? Somehow I'm thinking that a 2
> > > socket 480 CPU system only has like 2 nodes and while splitting this
> > > will help some, that might not be excellent.
> >
> > You mean enhancing the per NUMA counter to per LLC? I think that makes
> > sense to reduce the LLC cache bounce if there are multiple LLCs per
> > NUMA node.
>
> Does that system have multiple LLCs? Realistically, it would probably
> improve things if we could split these giant stupid LLCs along the same
> lines SNC does.
The system that Pan tested does not have multiple LLCs per node. But
future Intel systems and current AMD systems do, so it makes sense to
start thinking about a per-LLC counter infrastructure.
We could create a per-LLC counter library, much like the percpu counters
we already have. We can leverage the compact LLC id assignment in the
cache aware scheduling patches to allocate arrays indexed by LLC id.
The caveat is that if such a counter is used during early boot, before
LLCs are enumerated by the topology code, we may need to do the
accounting in a global count until the LLCs are enumerated and we know
the right size of the LLC array. We'll also need to handle LLCs coming
online or going offline.
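Roughly something like the userspace sketch below (all names are made
up for illustration, nothing here matches actual kernel APIs; the real
thing would build on atomics/percpu machinery and proper hotplug
callbacks). Updates before enumeration land in a global fallback
counter; once the topology code tells us the number of LLCs, new
updates go to the per-LLC slot and the sum folds both in:

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Hypothetical per-LLC counter: illustrative only, not kernel code. */
struct llc_counter {
	atomic_long early;	/* accumulates updates before LLC enumeration */
	atomic_long *per_llc;	/* allocated once nr_llcs is known */
	int nr_llcs;
};

static void llc_counter_add(struct llc_counter *c, int llc_id, long delta)
{
	if (!c->per_llc)	/* early boot: fall back to the global count */
		atomic_fetch_add(&c->early, delta);
	else
		atomic_fetch_add(&c->per_llc[llc_id], delta);
}

static long llc_counter_sum(struct llc_counter *c)
{
	/* early-boot contributions stay in ->early; fold them in here */
	long sum = atomic_load(&c->early);

	for (int i = 0; i < c->nr_llcs; i++)
		sum += atomic_load(&c->per_llc[i]);
	return sum;
}

static int llc_counter_online(struct llc_counter *c, int nr_llcs)
{
	/* topology enumerated: size the array by the compact LLC ids */
	c->per_llc = calloc(nr_llcs, sizeof(*c->per_llc));
	if (!c->per_llc)
		return -1;
	c->nr_llcs = nr_llcs;
	return 0;
}
```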
Does that sound reasonable?
Tim
>
> I still have the below terrible hack that I've been using to diagnose
> and test all these multi-llc patches/regressions etc. Funnily enough it's
> been good enough to actually show some of the issues.
>
>
>
> ---
> Subject: x86/topology: Add parameter to split LLC
> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Date: Thu Feb 19 12:11:16 CET 2026
>
> Add a (debug) option to virtually split the LLC, no CAT involved, just fake
> topology. Used to test code that depends (either in behaviour or directly) on
> there being multiple LLC domains in a node.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> ---
> Documentation/admin-guide/kernel-parameters.txt | 12 ++++++++++++
> arch/x86/include/asm/processor.h | 5 +++++
> arch/x86/kernel/smpboot.c | 20 ++++++++++++++++++++
> 3 files changed, 37 insertions(+)
>
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -7241,6 +7241,18 @@ Kernel parameters
> Not specifying this option is equivalent to
> spec_store_bypass_disable=auto.
>
> + split_llc=
> + [X86,EARLY] Split the LLC N-ways
> +
> + When set, the LLC is split this many ways by matching
> + 'core_id % n'. This is setup before SMP bringup and
> + used during SMP bringup before it knows the full
> + topology. If your core count doesn't nicely divide by
> + the number given, you get to keep the pieces.
> +
> + This is mostly a debug feature to emulate multiple LLCs
> + on hardware that only has a single LLC.
> +
> split_lock_detect=
> [X86] Enable split lock detection or bus lock detection
>
> --- a/arch/x86/include/asm/processor.h
> +++ b/arch/x86/include/asm/processor.h
> @@ -699,6 +699,11 @@ static inline u32 per_cpu_l2c_id(unsigne
> return per_cpu(cpu_info.topo.l2c_id, cpu);
> }
>
> +static inline u32 per_cpu_core_id(unsigned int cpu)
> +{
> + return per_cpu(cpu_info.topo.core_id, cpu);
> +}
> +
> #ifdef CONFIG_CPU_SUP_AMD
> /*
> * Issue a DIV 0/1 insn to clear any division data from previous DIV
> --- a/arch/x86/kernel/smpboot.c
> +++ b/arch/x86/kernel/smpboot.c
> @@ -424,6 +424,21 @@ static const struct x86_cpu_id intel_cod
> {}
> };
>
> +/*
> + * Allows splitting the LLC by matching 'core_id % split_llc'.
> + *
> + * This is mostly a debug hack to emulate systems with multiple LLCs per node
> + * on systems that do not naturally have this.
> + */
> +static unsigned int split_llc = 0;
> +
> +static int __init split_llc_setup(char *str)
> +{
> + get_option(&str, &split_llc);
> + return 0;
> +}
> +early_param("split_llc", split_llc_setup);
> +
> static bool match_llc(struct cpuinfo_x86 *c, struct cpuinfo_x86 *o)
> {
> const struct x86_cpu_id *id = x86_match_cpu(intel_cod_cpu);
> @@ -438,6 +453,11 @@ static bool match_llc(struct cpuinfo_x86
> if (per_cpu_llc_id(cpu1) != per_cpu_llc_id(cpu2))
> return false;
>
> + if (split_llc &&
> + (per_cpu_core_id(cpu1) % split_llc) !=
> + (per_cpu_core_id(cpu2) % split_llc))
> + return false;
> +
> /*
> * Allow the SNC topology without warning. Return of false
> * means 'c' does not share the LLC of 'o'. This will be