Re: [PATCH 1/2] sched/fair: Prefer fully-idle SMT cores in asym-capacity idle selection
From: Andrea Righi
Date: Sat Apr 18 2026 - 04:24:47 EST
Hi Dietmar,
On Tue, Apr 07, 2026 at 01:21:16PM +0200, Dietmar Eggemann wrote:
>
>
> On 03.04.26 07:31, Andrea Righi wrote:
> > On systems with asymmetric CPU capacity (e.g., ACPI/CPPC reporting
> > different per-core frequencies), the wakeup path uses
> > select_idle_capacity() and prioritizes idle CPUs with higher capacity
> > for better task placement.
> >
> > However, when those CPUs belong to SMT cores, their effective capacity
> > can be much lower than the nominal capacity when the sibling thread is
> > busy: SMT siblings compete for shared resources, so a "high capacity"
> > CPU that is idle but whose sibling is busy does not deliver its full
> > capacity. This effective capacity reduction cannot be modeled by the
> > static capacity value alone.
> >
> > When SMT is active, teach asym-capacity idle selection to treat a
> > logical CPU as a weaker target if its physical core is only partially
> > idle: select_idle_capacity() no longer returns on the first idle CPU
> > whose static capacity fits the task when that CPU still has a busy
> > sibling, it keeps scanning for an idle CPU on a fully-idle core and only
> > if none qualify does it fall back to partially-idle cores, using shifted
> > fit scores so fully-idle cores win ties; asym_fits_cpu() applies the
> > same fully-idle core requirement when asym capacity and SMT are both
> > active.
> >
> > This improves task placement, since partially-idle SMT siblings deliver
> > less than their nominal capacity. Favoring fully idle cores, when
> > available, can significantly enhance both throughput and wakeup latency
> > on systems with both SMT and CPU asymmetry.
> >
> > No functional changes on systems with only asymmetric CPUs or only SMT.
> >
> > Cc: K Prateek Nayak <kprateek.nayak@xxxxxxx>
> > Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> > Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
> > Cc: Christian Loehle <christian.loehle@xxxxxxx>
> > Cc: Koba Ko <kobak@xxxxxxxxxx>
> > Reported-by: Felix Abecassis <fabecassis@xxxxxxxxxx>
> > Signed-off-by: Andrea Righi <arighi@xxxxxxxxxx>
> > ---
> > kernel/sched/fair.c | 36 ++++++++++++++++++++++++++++++++----
> > 1 file changed, 32 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index bf948db905ed1..7f09191014d18 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7774,6 +7774,7 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> > static int
> > select_idle_capacity(struct task_struct *p, struct sched_domain *sd, int target)
> > {
> > + bool prefers_idle_core = sched_smt_active() && test_idle_cores(target);
>
> Somehow I miss a:
>
> if (prefers_idle_core)
> set_idle_cores(target, false)
>
> The one in select_idle_sibling() -> select_idle_cpu() isn't executed
> anymore with ASYM_CPUCAPACITY.
>
Right, we need to add this, as Vincent also pointed out.
>
> Another thing is that sic() iterates over CPUs sd_asym_cpucapacity
> whereas the idle core thing lives in sd_llc/sd_llc_shared. Both sd's are
> probably the same on your system.
Hm... they're the same on my machine, but if they're different, clearing
has_idle_cores here is not right and it might lead to false positives. We
should only clear it when both domains span the same CPUs (or simply check
whether sd_asym_cpucapacity and sd_llc are the same domain).
However, if they're not the same, I'm not sure exactly what we should do...
maybe ignore has_idle_cores and always do the scan for now?
Thanks,
-Andrea