Re: [RFC PATCH v3] sched/fair: select idle cpu from idle cpumask for task wakeup
From: Li, Aubrey
Date: Wed Nov 11 2020 - 03:38:34 EST
On 2020/11/9 23:54, Valentin Schneider wrote:
>
> On 09/11/20 13:40, Li, Aubrey wrote:
>> On 2020/11/7 5:20, Valentin Schneider wrote:
>>>
>>> On 21/10/20 16:03, Aubrey Li wrote:
>>>> From: Aubrey Li <aubrey.li@xxxxxxxxx>
>>>>
>>>> Added idle cpumask to track idle cpus in sched domain. When a CPU
>>>> enters idle, its corresponding bit in the idle cpumask will be set,
>>>> and when the CPU exits idle, its bit will be cleared.
>>>>
>>>> When a task wakes up and selects an idle cpu, scanning the idle cpumask
>>>> has lower cost than scanning all the cpus in the last level cache domain,
>>>> especially when the system is heavily loaded.
>>>>
>>>
>>> FWIW I gave this a spin on my arm64 desktop (Ampere eMAG, 32 core). I get
>>> some barely noticeable (AIUI not statistically significant for bench sched)
>>> changes for 100 iterations of:
>>>
>>> | bench | metric | mean | std | q90 | q99 |
>>> |------------------------------------+----------+--------+---------+--------+--------|
>>> | hackbench --loops 5000 --groups 1 | duration | -1.07% | -2.23% | -0.88% | -0.25% |
>>> | hackbench --loops 5000 --groups 2 | duration | -0.79% | +30.60% | -0.49% | -0.74% |
>>> | hackbench --loops 5000 --groups 4 | duration | -0.54% | +6.99% | -0.21% | -0.12% |
>>> | perf bench sched pipe -T -l 100000 | ops/sec | +1.05% | -2.80% | -0.17% | +0.39% |
>>>
>>> q90 & q99 being the 90th and 99th percentile.
>>>
>>> Base was tip/sched/core at:
>>> d8fcb81f1acf ("sched/fair: Check for idle core in wake_affine")
>>
>> Thanks for the data, Valentin! So does the negative value mean improvement?
>>
>
> For hackbench yes (shorter is better); for perf bench sched no, since the
> metric here is ops/sec so higher is better.
>
> That said, I ran (via a tool) a 2-sample Kolmogorov–Smirnov test against
> the two sample sets (tip/sched/core vs tip/sched/core+patch), and the
> p-value for perf sched bench is quite high (~0.9), which means we can't
> reject the hypothesis that both sample sets come from the same
> distribution; long story short, we can't say whether the patch had a
> noticeable impact for that benchmark.
>
>> If so, the data looks as expected to me. We set a bit in the idle cpumask
>> every time a CPU enters idle, but only clear bits at tick frequency, so if
>> the workload is not heavy enough, there can be a lot of idle time between
>> two ticks, and the idle cpumask stays almost equal to
>> sched_domain_span(sd), which makes no difference.
>>
>> But if the system load is heavy enough, CPUs have little or no chance to
>> enter idle, so their bits in the idle cpumask get cleared at the tick,
>> which makes the number of bits set in sds_idle_cpus(sd->shared) far less
>> than the number in sched_domain_span(sd) if the llc domain has a large
>> count of CPUs.
>>
>
> With hackbench -g 4 that's 160 tasks (against 32 CPUs, all under same LLC),
> although the work done by each task isn't much. I'll try bumping that a
> notch, or increasing the size of the messages.
As long as the system is busy enough that CPUs rarely schedule the idle
thread, the idle cpumask will shrink tick by tick, and we'll see a lower
sd->avg_scan_cost. This version of the patch sets a CPU's idle bit every
time it enters idle, so a heavy load is needed to keep the scheduler from
switching the idle thread in.
I personally like the logic in the previous versions:
- when a CPU enters idle, the cpuidle governor returns a "stop_tick" flag
- if the tick is stopped, the CPU is not busy, so its bit can be set in
  the idle cpumask
- otherwise, the CPU is likely to resume work very soon, so its bit is
  not set in the idle cpumask.
But apparently I missed the "nohz=off" case in the previous implementation.
For "nohz=off" I chose to keep the original behavior, which didn't satisfy
Mel. Probably I can refine it in the next version.
Do you have any suggestions?
Thanks,
-Aubrey