Re: [REGRESSION][v6.17-rc1] sched/fair: Bump sd->max_newidle_lb_cost when newidle balance fails
From: Hazem Mohamed Abuelfotoh
Date: Fri Oct 10 2025 - 09:10:17 EST
>> Hi Chris,
>>
>> During testing, we are seeing a ~6% performance regression with the
>> upstream stable v6.12.43 kernel (and the Oracle UEK
>> 6.12.0-104.43.4.el9uek.x86_64 kernel) when running the Phoronix
>> pts/apache benchmark with 100 concurrent requests [0]. The regression
>> is seen with the following hardware:
>>
>> PROCESSOR: Intel Xeon Platinum 8167M
>>   Core Count: 8
>>   Thread Count: 16
>>   Extensions: SSE 4.2 + AVX512CD + AVX2 + AVX + RDRAND + FSGSBASE
>>   Cache Size: 16 MB
>>   Microcode: 0x1
>>   Core Family: Cascade Lake
>>
>> After performing a bisect, we found that the performance regression was
>> introduced by the following commit:
>>
>> Stable v6.12.43: fc4289233e4b ("sched/fair: Bump sd->max_newidle_lb_cost
>> when newidle balance fails")
>> Mainline v6.17-rc1: 155213a2aed4 ("sched/fair: Bump
>> sd->max_newidle_lb_cost when newidle balance fails")
>>
>> Reverting this commit makes the performance regression disappear.
>>
>> I was hoping to get your feedback, since you are the patch author. Do
>> you think gathering any additional data will help diagnose this issue?
> Hi everyone,
> Peter, we've had a collection of regression reports based on this
> change, so it sounds like we need to make it less aggressive, or maybe
> we need to make the degrading of the cost number more aggressive?
> Joe (and everyone else who has hit this), can I talk you into trying the
> drgn script from
> https://lore.kernel.org/lkml/2fbf24bc-e895-40de-9ff6-5c18b74b4300@xxxxxxxx/
> I'm curious if it degrades at all or just gets stuck up high.
Hi All,
We are also seeing a 20-30% performance regression on database workloads,
specifically Cassandra and MongoDB, across multiple hardware platforms. We
have seen the regression on v6.1.149 and v6.12.43, and we were able to
bisect it to 155213a2aed4 ("sched/fair: Bump sd->max_newidle_lb_cost when
newidle balance fails").
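For anyone following the thread who has not read the patch, below is a rough
userspace model of our understanding of the change: a failed newidle balance
now bumps the tracked sd->max_newidle_lb_cost, and since that value gates
whether a newly idle CPU even attempts a balance, repeated failures quickly
shut newidle balancing off. The function and field names mirror
kernel/sched/fair.c, but the bump factor and the numbers are illustrative
assumptions, not the exact values from the patch.

    /*
     * Toy userspace model of the behaviour we think we bisected to; this is
     * NOT the kernel code. Names follow kernel/sched/fair.c; the bump factor
     * and the numbers below are made up for illustration.
     */
    #include <stdio.h>
    #include <stdint.h>

    struct sd_model {
            uint64_t max_newidle_lb_cost;   /* worst observed newidle balance cost, ns */
    };

    /* Newidle balance is only attempted when the CPU expects to stay idle
     * longer than the tracked worst-case cost of doing the balance. */
    static int should_try_newidle(uint64_t avg_idle_ns, const struct sd_model *sd)
    {
            return avg_idle_ns > sd->max_newidle_lb_cost;
    }

    /* The gist of the patch, as we read it: when a balance attempt pulls
     * nothing, record a cost larger than what was measured, so repeated
     * failures ratchet max_newidle_lb_cost up and the gate above closes. */
    static void record_balance_result(struct sd_model *sd, uint64_t cost_ns, int pulled_task)
    {
            if (!pulled_task) {
                    uint64_t bumped = (3 * sd->max_newidle_lb_cost) / 2; /* illustrative factor */
                    if (bumped > cost_ns)
                            cost_ns = bumped;
            }
            if (cost_ns > sd->max_newidle_lb_cost)
                    sd->max_newidle_lb_cost = cost_ns;
    }

    int main(void)
    {
            struct sd_model sd = { .max_newidle_lb_cost = 0 };
            uint64_t avg_idle_ns = 50000;   /* pretend 50us of expected idle time */

            for (int i = 0; i < 8; i++) {
                    if (!should_try_newidle(avg_idle_ns, &sd)) {
                            printf("attempt %d: skipped, max_newidle_lb_cost=%llu\n",
                                   i, (unsigned long long)sd.max_newidle_lb_cost);
                            continue;
                    }
                    /* every attempt fails to pull a task in this toy run */
                    record_balance_result(&sd, 20000, 0);
                    printf("attempt %d: failed,  max_newidle_lb_cost=%llu\n",
                           i, (unsigned long long)sd.max_newidle_lb_cost);
            }
            return 0;
    }

If that reading is right, it would explain why latency-sensitive workloads
end up with newly idle CPUs that stop pulling waiting tasks.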
We were able to reproduce this regression on the following AWS instance types:
- c7a.4xlarge (16 4th Generation AMD EPYC processors + 32 GiB RAM)
- c7i.4xlarge (16 4th Generation Intel Xeon Scalable processors + 32 GiB RAM)
- c7g.4xlarge (16 AWS Arm-based Graviton3 processors + 32 GiB RAM)
- c8g.4xlarge (16 AWS Arm-based Graviton4 processors + 32 GiB RAM)
We will try the drgn script from
https://lore.kernel.org/lkml/2fbf24bc-e895-40de-9ff6-5c18b74b4300@xxxxxxxx/
and let you know the results. Meanwhile, given the significant impact,
should we revert this commit on the latest mainline and on the impacted
stable branches to stop the bleeding until we have a permanent fix?
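For reference, the decay Chris mentioned is, as far as we can tell, the
roughly 1%-per-second decay applied in update_newidle_cost(); the toy model
below shows how slowly a ratcheted-up value comes back down. The 253/256
step and the once-per-second interval are from our reading of
kernel/sched/fair.c and may not match the exact kernels above.

    /*
     * Toy model of the decay; again NOT the kernel code. The 253/256 step
     * and the once-per-second interval are assumptions from our reading of
     * update_newidle_cost() in kernel/sched/fair.c.
     */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t max_newidle_lb_cost = 4000000; /* pretend a 4ms worst case was recorded */

            /* With no further bumps, ~1% comes off per second, so it takes
             * on the order of a minute for the value to fall by half. */
            for (int sec = 0; sec <= 60; sec += 10) {
                    printf("t=%2ds max_newidle_lb_cost=%llu ns\n",
                           sec, (unsigned long long)max_newidle_lb_cost);
                    for (int i = 0; i < 10; i++)
                            max_newidle_lb_cost = (max_newidle_lb_cost * 253) / 256;
            }
            return 0;
    }

If the value is being bumped faster than this decay can bring it back down,
that would line up with it appearing stuck up high rather than degrading.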
Thank you.
Hazem