Re: [PATCH v2] schied/fair: Skip calculating @contrib without load
From: Vincent Guittot
Date: Fri Dec 13 2019 - 04:22:09 EST
Hi Peng,
Minor typo in the subject: s/schied/sched/
On Fri, 13 Dec 2019 at 04:46, Peng Wang <rocking@xxxxxxxxxxxxxxxxx> wrote:
>
> Because of the:
>
>         if (!load)
>                 runnable = running = 0;
>
> clause in ___update_load_sum(), all the actual users of @contrib in
> accumulate_sum():
>
>         if (load)
>                 sa->load_sum += load * contrib;
>         if (runnable)
>                 sa->runnable_load_sum += runnable * contrib;
>         if (running)
>                 sa->util_sum += contrib << SCHED_CAPACITY_SHIFT;
>
> are never executed, so we don't care what @contrib actually is and
> calculating it is pointless.
>
> If we count the times when @load equals zero and when it does not, as below:
>
>         if (load) {
>                 load_is_not_zero_count++;
>                 contrib = __accumulate_pelt_segments(periods,
>                                 1024 - sa->period_contrib, delta);
>         } else
>                 load_is_zero_count++;
>
> As we can see, load_is_zero_count is much bigger than
> load_is_not_zero_count, and the gap gradually widens:
>
> load_is_zero_count: 6016044 times
> load_is_not_zero_count: 244316 times
> 19:50:43 up 1 min, 1 user, load average: 0.09, 0.06, 0.02
>
> load_is_zero_count: 7956168 times
> load_is_not_zero_count: 261472 times
> 19:51:42 up 2 min, 1 user, load average: 0.03, 0.05, 0.01
>
> load_is_zero_count: 10199896 times
> load_is_not_zero_count: 278364 times
> 19:52:51 up 3 min, 1 user, load average: 0.06, 0.05, 0.01
>
> load_is_zero_count: 14333700 times
> load_is_not_zero_count: 318424 times
> 19:54:53 up 5 min, 1 user, load average: 0.01, 0.03, 0.00
Your system looks pretty idle, so I'm not sure we'd see a benefit from
your patch in such a situation.
>
> Perhaps we can gain some performance advantage by saving these
> unnecessary calculations.
load == 0 when:
- the system is idle and we update blocked load; we don't really care
about performance in this case, and we stop triggering the update once
load_avg reaches zero
- an rt/dl/cfs rq or a sched_entity wakes up. In this case, skipping
the calculation of the contribution should speed up the wake-up path,
although I'm not sure by how much.
Nevertheless, this change makes sense
Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
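To make concrete what gets skipped when load == 0: below is a simplified,
stand-alone sketch of the geometric-series accumulation that
__accumulate_pelt_segments() performs (an illustrative model with a
made-up decay() helper, not the kernel's exact code):

        /*
         * Illustrative model only: approximate the PELT rule y^32 == 0.5
         * by halving once per 32 elapsed periods; the kernel uses a
         * lookup table and a closed-form sum instead of this loop.
         */
        static unsigned int decay(unsigned int val, unsigned int periods)
        {
                while (periods >= 32) {
                        val >>= 1;
                        periods -= 32;
                }
                return val;
        }

        /* contrib = d1 * y^p + 1024 * (y^1 + ... + y^(p-1)) + d3 */
        static unsigned int accumulate_segments(unsigned int periods,
                                                unsigned int d1, unsigned int d3)
        {
                unsigned int n, contrib = d3;

                contrib += decay(d1, periods);
                for (n = 1; n < periods; n++)
                        contrib += decay(1024, n);

                return contrib;
        }

With the patch applied, none of this per-period work runs when load == 0,
which is where any wake-up path gain would come from.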
>
> Signed-off-by: Peng Wang <rocking@xxxxxxxxxxxxxxxxx>
> ---
> kernel/sched/pelt.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
> index a96db50..4392953 100644
> --- a/kernel/sched/pelt.c
> +++ b/kernel/sched/pelt.c
> @@ -129,8 +129,9 @@ static u32 __accumulate_pelt_segments(u64 periods, u32 d1, u32 d3)
>                  * Step 2
>                  */
>                 delta %= 1024;
> -               contrib = __accumulate_pelt_segments(periods,
> -                               1024 - sa->period_contrib, delta);
> +               if (load)
> +                       contrib = __accumulate_pelt_segments(periods,
> +                                       1024 - sa->period_contrib, delta);
>         }
>         sa->period_contrib = delta;
>
> --
> 1.8.3.1
>