Re: [PATCH 0/4] sched/fair: SMT-aware asymmetric CPU capacity

From: Andrea Righi

Date: Tue Mar 31 2026 - 05:09:30 EST


Hi Dietmar,

On Tue, Mar 31, 2026 at 12:30:55AM +0200, Dietmar Eggemann wrote:
> Hi Andrea,
>
> On 26.03.26 16:02, Andrea Righi wrote:
>
> [...]
>
> > This patch set has been tested on the new NVIDIA Vera Rubin platform, where
> > SMT is enabled and the firmware exposes small frequency variations (+/-~5%)
> > as differences in CPU capacity, resulting in SD_ASYM_CPUCAPACITY being set.
> >
> > Without these patches, performance can drop up to ~2x with CPU-intensive
> > workloads, because the SD_ASYM_CPUCAPACITY idle selection policy does not
> > account for busy SMT siblings.
> >
> > Alternative approaches have been evaluated, such as equalizing CPU
> > capacities, either by exposing uniform values via firmware (ACPI/CPPC) or
> > normalizing them in the kernel by grouping CPUs within a small capacity
> > window (+/-5%) [1][2], or enabling asym packing (SD_ASYM_PACKING) [3].
> >
> > However, adding SMT awareness to SD_ASYM_CPUCAPACITY has shown better
> > results so far. Improving this policy also seems worthwhile in general, as
> > other platforms in the future may enable SMT with asymmetric CPU
> > topologies.
> I still wonder whether we really need select_idle_capacity() (plus the
> smt part) for asymmetric CPU capacity systems where the CPU capacity
> differences are < 5% of SCHED_CAPACITY_SCALE.
>
> The known example would be the NVIDIA Grace (!smt) server with its
> slightly different perf_caps.highest_perf values.
>
> We did run DCPerf Mediawiki on this thing with:
>
> (1) ASYM_CPUCAPACITY (default)
>
> (2) NO ASYM_CPUCAPACITY
>
> We also ran on a comparable ARM64 server (!smt) for comparison:
>
> (1) ASYM_CPUCAPACITY
>
> (2) NO ASYM_CPUCAPACITY (default)
>
> Both systems have 72 CPUs, run v6.8 and have a single MC sched domain
> with LLC spanning over all 72 CPUs. During the tests there were ~750
> tasks among them the workload related:
>
> #hhvmworker 147
> #mariadbd 204
> #memcached 11
> #nginx 8
> #wrk 144
> #ProxygenWorker 1
>
> load_balance:
>
> not_idle 3x more on (2)
>
> idle 2x more on (2)
>
> newly_idle 2-10x more on (2)
>
> wakeup:
>
> move_affine 2-3x more on (1)
>
> ttwu_local 1.5-2x more on (2)
>
> We also instrumented all the bailout conditions in select_idle_sibling()
> (sis()) -> select_idle_cpu() and select_idle_capacity() (sic()).
>
> In (1) almost all wakeups end up in select_idle_cpu() returning -1 due
> to the fact that 'sd->shared->nr_idle_scan' under SIS_UTIL is 0. So
> sis() in (1) almost always returns target (this_cpu or prev_cpu). sic()
> doesn't do this.
>
> What I haven't done is to try (1) with SIS_UTIL or (2) with NO_SIS_UTIL.
>
> I wonder whether this is the underlying reason for the benefit of (1)
> over (2) we see here with smt now?
>
> So IMHO before adding smt support to (1) for these small CPPC based CPU
> capacity differences we should make sure that the same can't be achieved
> by disabling SIS_UTIL or to soften it a bit.
>
> So does (2) with NO_SIS_UTIL perform worse than (1) with your smt
> related add-ons in sic()?

Thanks for running these experiments and sharing the data, this is very
useful!

I did a quick test on Vera using the NVBLAS benchmark, comparing NO
ASYM_CPUCAPACITY with and without SIS_UTIL, but the difference seems to be
within the margin of error. I'll also run DCPerf MediaWiki with all the
different configurations to see if I get similar results.

More generally, I agree that for small capacity differences (e.g., within
~5%) the benefits of using ASYM_CPUCAPACITY are questionable. And I'm also
fine with going back to the idea of grouping together CPUs within the 5%
capacity window, if we think that's a safer approach (the results in your
case are quite clear). Note that this also implies we shouldn't have
ASYM_CPUCAPACITY on Grace, so in theory the 5% threshold should improve
performance on Grace as well, even though it doesn't have SMT.

That said, I still think there's value in adding SMT awareness to
select_idle_capacity(). Even if we decide to avoid ASYM_CPUCAPACITY for
small capacity deltas, we should ensure that the behavior remains
reasonable if both features end up enabled, for whatever reason. Right
now, there are cases where the current behavior leads to significant
performance degradation (~2x), so having a mechanism to prevent clearly
suboptimal task placement still seems worthwhile. Essentially, what I'm
saying is that one approach doesn't exclude the other.

Thanks,
-Andrea