Re: [PATCH] sched/fair: Consider RT/IRQ pressure in capacity_spare_wake
From: Vincent Guittot
Date: Fri Nov 17 2017 - 03:50:30 EST
On 16 November 2017 at 22:53, Joel Fernandes <joelaf@xxxxxxxxxx> wrote:
> Hi Vincent,
>
> Thanks a lot for your reply, and sorry for the late response. I just
> started paternity leave, so that's the reason for the delay. My working hours
Congratulations !
> are completely random at the moment :-)
>
> On Fri, Nov 10, 2017 at 12:29 AM, Vincent Guittot
> <vincent.guittot@xxxxxxxxxx> wrote:
>> On 9 November 2017 at 19:52, Joel Fernandes <joelaf@xxxxxxxxxx> wrote:
>>> capacity_spare_wake() in the slow path influences the choice of idlest
>>> group, as we search for the group with maximum spare capacity. In
>>> scenarios where RT pressure is high, a suboptimal group can be chosen,
>>> hurting the performance of the task being woken up.
>>>
>>> Several tests with results are included below to show improvements with
>>> this change.
>>>
>>> 1) Hackbench on Pixel 2 Android device (4x4 ARM64 Octa core)
>>
>> "4x4 ARM64 Octa core" is confusing . At least for me, 4x4 means 16 cores :-)
>
> Sure, I'll fix it; I meant 4 big and 4 LITTLE CPUs :)
>
>>
>>> ------------------------------------------------------------
>>> Here we have RT activity, induced with rt-app, running on the big CPU
>>> cluster, with hackbench running in parallel. The RT tasks are bound to
>>> 4 CPUs on the big cluster (CPUs 4,5,6,7) and have a 100ms period with
>>> runtime=20ms and sleep=80ms.
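(For anyone wanting to reproduce the RT load: I assume the rt-app
description looked roughly like the sketch below. The key names follow
rt-app's JSON grammar, times are in microseconds, and the exact values
and file used for these runs may well differ.)

{
	"tasks" : {
		"rt_thread" : {
			"policy" : "SCHED_FIFO",
			"instance" : 4,
			"cpus" : [4, 5, 6, 7],
			"loop" : -1,
			"run" : 20000,
			"sleep" : 80000
		}
	},
	"global" : {
		"duration" : 30,
		"calibration" : "CPU0",
		"default_policy" : "SCHED_OTHER"
	}
}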
>>>
>>> Hackbench shows a big benefit when the number of tasks is 8 (about +30%)
>>> and 32 (about +11%). Note: data is completion time in seconds (lower is
>>> better). The number of loops is 50000 for 8 and 16 tasks, and 20000 for
>>> 32 tasks.
>>> +--------+-----+-------+-------------------+---------------------------+
>>> | groups | fds | tasks |   Without Patch   |         With Patch        |
>>> +--------+-----+-------+---------+---------+-----------------+---------+
>>> |        |     |       |  Mean   |  Stdev  |      Mean       |  Stdev  |
>>> |        |     |       +---------+---------+-----------------+---------+
>>> |   1    |  8  |   8   | 1.0534  | 0.13722 | 0.7293 (+30.7%) | 0.02653 |
>>> |   2    |  8  |  16   | 1.6219  | 0.16631 | 1.6391 (-1%)    | 0.24001 |
>>> |   4    |  8  |  32   | 1.2538  | 0.13086 | 1.1080 (+11.6%) | 0.16201 |
>>> +--------+-----+-------+---------+---------+-----------------+---------+
>>
>> Out of curiosity, do you know why you don't see any improvement for
>> 16 tasks but only for 8 and 32 tasks?
>
> Yes, I'm not fully sure why 16 tasks didn't show that much improvement.
Yes. This is just to make sure that there is no unexpected side effect.
> I can try to trace it when I get a chance. Generally for this test,
> the improvement gets smaller as the number of tasks grows. However,
> you're right to point out that the improvement with 32 tasks is greater
> than with 16 for this test.
>
> [..]
>>> kernel/sched/fair.c | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 56f343b8e749..ba9609407cb9 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -5724,7 +5724,7 @@ static int cpu_util_wake(int cpu, struct task_struct *p);
>>>
>>>  static unsigned long capacity_spare_wake(int cpu, struct task_struct *p)
>>>  {
>>> -	return capacity_orig_of(cpu) - cpu_util_wake(cpu, p);
>>> +	return max_t(long, capacity_of(cpu) - cpu_util_wake(cpu, p), 0);
>>
>> Makes sense.
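To make the difference concrete, here is a small userspace toy model (not
kernel code: the struct and helpers below only stand in for
capacity_orig_of()/capacity_of()/cpu_util_wake(), and all the numbers are
invented). With the old formula a CPU heavily pressured by RT work reports
the same spare capacity as an idle one; with the new formula its spare
capacity collapses, and the max_t() clamp keeps the result from wrapping
when utilization exceeds the remaining capacity:

/*
 * Userspace toy model, NOT kernel code: cap_orig/cap/util stand in for
 * capacity_orig_of()/capacity_of()/cpu_util_wake(); numbers are invented.
 */
#include <stdio.h>

struct toy_cpu {
	unsigned long cap_orig;	/* original capacity of the CPU */
	unsigned long cap;	/* capacity left after RT/IRQ pressure */
	unsigned long util;	/* CFS utilization at wakeup, task removed */
};

/* Old formula: ignores RT/IRQ pressure, unsigned subtraction. */
static unsigned long spare_old(const struct toy_cpu *c)
{
	return c->cap_orig - c->util;
}

/* New formula: pressured capacity, clamped at 0 like max_t(long, ..., 0). */
static unsigned long spare_new(const struct toy_cpu *c)
{
	long spare = (long)c->cap - (long)c->util;

	return spare > 0 ? spare : 0;
}

int main(void)
{
	struct toy_cpu idle_cpu = { 1024, 1024, 100 };	/* no RT pressure   */
	struct toy_cpu rt_cpu   = { 1024,  200, 100 };	/* ~80% RT pressure */
	struct toy_cpu busy_cpu = { 1024,  200, 300 };	/* util > capacity  */

	printf("old: idle=%lu rt=%lu busy=%lu\n",
	       spare_old(&idle_cpu), spare_old(&rt_cpu), spare_old(&busy_cpu));
	printf("new: idle=%lu rt=%lu busy=%lu\n",
	       spare_new(&idle_cpu), spare_new(&rt_cpu), spare_new(&busy_cpu));
	return 0;
}

With the old code both CPUs look equally attractive to the slow path, so
the woken task can land in the RT-loaded group; with the patch the
pressured CPU advertises much less spare capacity, which is consistent
with the hackbench numbers above.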
>>
>> Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
>
> Thanks!
>
> - Joel