Re: [PATCH 2/2 v3] sched: use load_avg for selecting idlest group

From: Matt Fleming
Date: Fri Dec 09 2016 - 08:22:21 EST


On Thu, 08 Dec, at 05:56:54PM, Vincent Guittot wrote:
> find_idlest_group() only compares the runnable_load_avg when looking for
> the least loaded group. But on fork-intensive use cases like hackbench,
> where tasks block quickly after the fork, this can lead to selecting the
> same CPU again and again instead of other CPUs which have a similar
> runnable load but a lower load_avg.
>
> When the runnable_load_avg of 2 CPUs is close, we now take into account
> the amount of blocked load as a 2nd selection factor. There are now 3 zones
> for the runnable_load of the rq:
> -[0 .. (runnable_load - imbalance)] : Select the new rq, which has
> significantly less runnable_load
> -](runnable_load - imbalance) .. (runnable_load + imbalance)[ : The
> runnable loads are close, so we use load_avg to choose between the 2 rqs
> -[(runnable_load + imbalance) .. ULONG_MAX] : Keep the current rq, which
> has significantly less runnable_load
>
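For anyone following along, here is how I read the three-zone decision.
This is a minimal userspace sketch, not the patch itself; the names
candidate_wins/cand_*/best_* and the value 110 for imbalance_scale are
my own assumptions for illustration:

    #include <stdio.h>

    /*
     * Sketch of the three-zone decision: does a candidate rq replace
     * the current best (least loaded) one?  'imbalance' is an absolute
     * margin on runnable_load; 'imbalance_scale' is the relative factor
     * kept for the blocked load (load_avg) comparison.
     */
    static int candidate_wins(unsigned long cand_runnable,
			      unsigned long cand_avg,
			      unsigned long best_runnable,
			      unsigned long best_avg,
			      unsigned long imbalance,
			      unsigned long imbalance_scale)
    {
	    /* Zone 1: significantly less runnable_load -> take it. */
	    if (cand_runnable + imbalance < best_runnable)
		    return 1;

	    /* Zone 3: significantly more runnable_load -> keep best. */
	    if (cand_runnable > best_runnable + imbalance)
		    return 0;

	    /* Zone 2: runnable loads are close; blocked load decides. */
	    return cand_avg * imbalance_scale < best_avg * 100;
    }

    int main(void)
    {
	    /* Equal runnable loads; the rq with less blocked load
	     * (e.g. fewer freshly forked, now-sleeping tasks) wins. */
	    printf("%d\n", candidate_wins(0, 100, 0, 500, 2, 110));
	    return 0;
    }
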
> The scale factor that is currently used for comparing runnable_load
> doesn't work well with small values. As an example, the use of a scaling
> factor fails as soon as this_runnable_load == 0, because we then always
> select the local rq even if min_runnable_load is only 1, which doesn't
> really make sense because the two loads are effectively the same. So
> instead of a scaling factor, we use an absolute margin for runnable_load
> to detect CPUs with a similar runnable_load, and we keep using the
> scaling factor for the blocked load.
>
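The small-value failure is easy to see with concrete numbers. A sketch,
assuming a purely relative check of the shape 100 * this_load <
scale * min_load (the exact pre-patch expression may differ, and the
helper names and scale value 110 are mine):

    #include <stdio.h>

    /* Old-style relative comparison: the local rq is preferred while
     * its runnable load is within 'scale' percent of the remote min. */
    static int local_wins_relative(unsigned long this_load,
				   unsigned long min_load,
				   unsigned long scale)
    {
	    return 100 * this_load < scale * min_load;
    }

    /* New-style absolute margin: loads within 'imbalance' of each
     * other count as equal, and load_avg then breaks the tie. */
    static int loads_are_close(unsigned long this_load,
			       unsigned long min_load,
			       unsigned long imbalance)
    {
	    return this_load <= min_load + imbalance &&
		   min_load <= this_load + imbalance;
    }

    int main(void)
    {
	    /* this_runnable_load == 0, min_runnable_load == 1: the
	     * relative check always keeps the local rq, while the
	     * margin treats the loads as equal so load_avg decides. */
	    printf("relative: %d, margin: %d\n",
		   local_wins_relative(0, 1, 110),
		   loads_are_close(0, 1, 2));
	    return 0;
    }
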
> For use cases like hackbench, this enables the scheduler to select
> different CPUs during the fork sequence and to spread tasks across the
> system.
>
> Tests have been done on a Hikey board (ARM-based octa-core) for several
> kernels. The results below give the min, max, avg and stdev values of 18
> runs with each configuration.
>
> The v4.8+patches configuration also includes the change below, which is
> part of the proposal made by Peter to ensure that the clock will be up to
> date when the forked task is attached to the rq.
>
> @@ -2568,6 +2568,7 @@ void wake_up_new_task(struct task_struct *p)
> __set_task_cpu(p, select_task_rq(p, task_cpu(p), SD_BALANCE_FORK, 0));
> #endif
> rq = __task_rq_lock(p, &rf);
> + update_rq_clock(rq);
> post_init_entity_util_avg(&p->se);
>
> activate_task(rq, p, 0);
>
> hackbench -P -g 1
>
>        ea86cb4b7621  7dc603c9028e  v4.8       v4.8+patches
> min    0.049         0.050         0.051      0.048
> avg    0.057         0.057(0%)     0.057(0%)  0.055(+5%)
> max    0.066         0.068         0.070      0.063
> stdev  +/-9%         +/-9%         +/-8%      +/-9%
>
> Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> ---
> kernel/sched/fair.c | 48 ++++++++++++++++++++++++++++++++++++++----------
> 1 file changed, 38 insertions(+), 10 deletions(-)

Tested-by: Matt Fleming <matt@xxxxxxxxxxxxxxxxxxx>
Reviewed-by: Matt Fleming <matt@xxxxxxxxxxxxxxxxxxx>

Peter, Ingo, when you pick this up, would you also consider adding the
following tag, which links to an email describing the problem this
patch solves and the performance test results when it's applied?

Link: https://lkml.kernel.org/r/20161203214707.GI20785@xxxxxxxxxxxxxxxxxxx