Re: [PATCH] sched/fair: Again ignore percpu threads for imbalance pulls
From: Valentin Schneider
Date: Tue Jan 18 2022 - 12:11:09 EST
On 18/01/22 16:11, Yihao Wu wrote:
> On 2022/1/18 1:16 am, Valentin Schneider wrote:
>> On 17/01/22 22:50, Yihao Wu wrote:
>>> wakeup balance keeps doing this until another NUMA node becomes
>>> overloaded too. And then a periodic load balance just shifts the load
>>> around, making the previously overloaded node completely idle.
>>>
>>
>> Oooh, right, I came to the same conclusion when I got that stress-ng
>> regression report back then:
>>
>> https://lore.kernel.org/all/871rajkfkn.mognet@xxxxxxx/
>>
>
> Shocked! I wasted weeks locating almost the same regression. Why on
> earth hadn't I read your discussion from half a year ago?
>
I've been there too :) It's a tricky thing: you have to at least do a
bisection to find some commit, and then search the ML for any further
discussion / reports on it...
>> I pretty much gave up on that, as the regression was caused by removing
>> an obscure/accidental balance which I couldn't properly codify. I can
>> give it
>
> Strange, the regression reported to me tells a different story from yours.
>
>                4.19.91    before_2f5f4    after_2f5f4
> my_report      good       bad             bad
> your_report    N/A        good            bad
>
> your_report says 2f5f4 introduces a new regression, while my_report says
> 2f5f4 doesn't help and leaves the old regression in place ...
>
> Maybe that's the reason why you gave up on fixing it, whereas I came to
> make can_migrate_task() cover more cases (kernel_thread).
>
Huh; 2f5f4cce496e is actually a 5.10-stable backport of 9bcb959d05ee; what
was the first bad commit for you?
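
FWIW, the bail-out that 9bcb959d05ee (and thus the 2f5f4 backport) adds near
the top of can_migrate_task() in kernel/sched/fair.c is roughly the below -
quoting from memory, so double-check against your tree:

    /* Disregard pcpu kthreads; they are where they need to be. */
    if (kthread_is_per_cpu(p))
            return 0;

IIUC your patch widens that check to cover more kernel threads than just
the per-CPU ones.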
>
>> another shot, but AFAICT that only affects fork/exec-heavy workloads
>> (that -13% was on something doing almost only forks), which is an odd
>> case to support.
>>
> Yes. They're indeed quite odd workloads.
> - Apps with masses of short-lived threads had better change their runtime
>   model, or use a thread pool.
> - Lots of different apps on the same machine are even odder.
>
> But I guess this problem affects normal workloads too, to some degree,
> though not significantly. It's hard to tell exactly how much influence it
> has.
>
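Right - and just to illustrate the runtime-model change you're suggesting, a
worker-pool version of such a workload could look like the below (purely
illustrative userspace sketch of my own; names and sizes are made up):

    #include <pthread.h>

    /* Hypothetical fixed-size worker pool standing in for the "thread
     * pool" model mentioned above: NWORKERS long-lived threads chew
     * through NJOBS units of work, so there is no fork()/clone() per
     * job and thus no reliance on fork-time balance. */
    #define NWORKERS 4
    #define NJOBS    64

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int next_job;

    static void *worker(void *arg)
    {
            (void)arg;
            for (;;) {
                    int job;

                    pthread_mutex_lock(&lock);
                    job = next_job < NJOBS ? next_job++ : -1;
                    pthread_mutex_unlock(&lock);

                    if (job < 0)
                            return NULL;
                    /* a few microseconds of "work" would go here */
            }
    }

    int main(void)
    {
            pthread_t tid[NWORKERS];

            for (int i = 0; i < NWORKERS; i++)
                    pthread_create(&tid[i], NULL, worker, NULL);
            for (int i = 0; i < NWORKERS; i++)
                    pthread_join(tid[i], NULL);
            return 0;
    }

(Compile with -pthread.)
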
Looking at my notes for the regression on that particular machine and that
particular benchmark: the group_imbalanced logic triggers for ~1% of the
forks, and the avg task lifespan was 6µs. IMO that's pretty extreme;
fork-time balance becomes the only available balance point for the child
tasks (IIRC the benchmark has N stressors forking one child each) - as you
said above, a more realistic approach here would use a thread pool of some
sort.
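
To make that concrete, the pattern is roughly the below (my reconstruction,
not the actual stress-ng source): each child lives for so short a time that
the fork-time placement done via wake_up_new_task() is effectively the only
balance it ever sees.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Rough reconstruction of the fork-heavy benchmark pattern
     * discussed above: N stressors, each forking one short-lived
     * child at a time in a tight loop. NSTRESSORS is arbitrary. */
    #define NSTRESSORS 4

    static void stressor(void)
    {
            for (;;) {
                    pid_t pid = fork();

                    if (pid == 0)
                            _exit(0); /* child lives ~microseconds */
                    if (pid > 0)
                            waitpid(pid, NULL, 0);
            }
    }

    int main(void)
    {
            for (int i = 0; i < NSTRESSORS; i++) {
                    if (fork() == 0)
                            stressor();
            }
            wait(NULL); /* stressors run until killed */
            return 0;
    }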