Re: [RESEND PATCH] sched: consider missed ticks when updating global cpu load

From: Frederic Weisbecker
Date: Sat Sep 26 2015 - 09:15:26 EST


On Fri, Sep 25, 2015 at 05:52:37PM +0900, byungchul.park@xxxxxxx wrote:
> From: Byungchul Park <byungchul.park@xxxxxxx>
>
> Hello,
>
> I already sent this patch about a month ago
> (see https://lkml.org/lkml/2015/8/13/160).
>
> I am now resending the same patch with some additional commit
> message.
>
> Thank you,
> Byungchul
>
> ----->8-----
> From 8ece9a0482e74a39cd2e9165bf8eec1d04665fa9 Mon Sep 17 00:00:00 2001
> From: Byungchul Park <byungchul.park@xxxxxxx>
> Date: Fri, 25 Sep 2015 17:10:10 +0900
> Subject: [RESEND PATCH] sched: consider missed ticks when updating global cpu
> load
>
> In hrtimer_interrupt(), the first tick_program_event() can fail
> because the next timer may already have expired due to (see the
> comment in hrtimer_interrupt()):
>
> - tracing
> - long lasting callbacks
> - being scheduled away when running in a VM
>
> In the case that the first tick_program_event() fails, the second
> tick_program_event() sets the expiry time to more than one tick later.
> The next tick can then happen more than one tick after the previous
> one, even though the tick has not been stopped by e.g. NOHZ.
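
[ For reference, the reprogram path in hrtimer_interrupt() that this refers
  to looks roughly like the following. This is an abbreviated sketch of the
  v4.2-era kernel/time/hrtimer.c; exact details vary by kernel version. ]

	/* Reprogramming necessary, if able to get a new timer interrupt */
	if (expires_next.tv64 == KTIME_MAX ||
	    !tick_program_event(expires_next, 0)) {
		cpu_base->hang_detected = 0;
		return;
	}

	/*
	 * The next timer was already expired due to:
	 *  - tracing
	 *  - long lasting callbacks
	 *  - being scheduled away when running in a VM
	 * Retry a few times, then force the next event far enough out
	 * for the CPU to catch up, i.e. more than one tick later.
	 */
	if (++retries < 3)
		goto retry;

	delta = ktime_sub(now, entry_time);
	if (delta.tv64 > 100 * NSEC_PER_MSEC)
		expires_next = ktime_add_ns(now, 100 * NSEC_PER_MSEC);
	else
		expires_next = ktime_add(now, delta);
	tick_program_event(expires_next, 1);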
>
> When that next tick occurs, update_process_times() -> scheduler_tick()
> -> update_cpu_load_active() is performed assuming that the distance
> between the last tick and the current tick is exactly one tick, which
> is wrong in this case. Thus, this abnormal case should be handled in
> update_cpu_load_active().
>
> Signed-off-by: Byungchul Park <byungchul.park@xxxxxxx>
> ---
> kernel/sched/fair.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 4d5f97b..829282f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4356,12 +4356,15 @@ void update_cpu_load_nohz(void)
>   */
>  void update_cpu_load_active(struct rq *this_rq)
>  {
> +	unsigned long curr_jiffies = READ_ONCE(jiffies);
> +	unsigned long pending_updates;
>  	unsigned long load = weighted_cpuload(cpu_of(this_rq));
>  	/*
>  	 * See the mess around update_idle_cpu_load() / update_cpu_load_nohz().
>  	 */
> -	this_rq->last_load_update_tick = jiffies;
> -	__update_cpu_load(this_rq, load, 1);
> +	pending_updates = curr_jiffies - this_rq->last_load_update_tick;
> +	this_rq->last_load_update_tick = curr_jiffies;
> +	__update_cpu_load(this_rq, load, pending_updates);
>  }

That's right, but __update_cpu_load() doesn't correctly handle pending updates
with non-zero loads. Currently, pending updates are wheeled through
decay_load_missed(), which assumes it's all about idle load.

But in the cases you've enumerated, as well as in the nohz full case, the missed
pending updates can be about busy loads.
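
[ For reference, an abbreviated sketch of what __update_cpu_load() does with
  the missed ticks; paraphrased from kernel/sched/fair.c of that era, with
  comments adapted, so details may differ slightly. ]

static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
			      unsigned long pending_updates)
{
	int i, scale;

	this_rq->nr_load_updates++;

	/* Update our load: */
	this_rq->cpu_load[0] = this_load;
	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		unsigned long old_load, new_load;

		old_load = this_rq->cpu_load[i];
		/*
		 * Decays old_load as if every one of the missed
		 * pending_updates - 1 ticks had seen a load of zero,
		 * i.e. as if the CPU had been idle the whole time.
		 */
		old_load = decay_load_missed(old_load, pending_updates - 1, i);

		new_load = this_load;
		/* Round up the averaging division if load is increasing */
		if (new_load > old_load)
			new_load += scale - 1;

		this_rq->cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}

	sched_avg_update(this_rq);
}

So with pending_updates > 1 on a CPU that was actually busy, the higher
cpu_load[] indexes get decayed towards zero as if the CPU had been idle.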

I think we need to fix __update_cpu_load() to handle that first, or your fix is
going to make things worse.
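
[ One possible direction, purely an illustrative sketch and not a tested fix;
  the extra 'active' parameter and the 'tickless_load' name are invented here:
  let the caller say whether the missed ticks were spent busy, and in that
  case decay towards the load seen at the last update instead of towards
  zero. ]

static void __update_cpu_load(struct rq *this_rq, unsigned long this_load,
			      unsigned long pending_updates, int active)
{
	/* Load to assume for each missed tick: last observed if busy, 0 if idle */
	unsigned long tickless_load = active ? this_rq->cpu_load[0] : 0;
	...
	for (i = 1, scale = 2; i < CPU_LOAD_IDX_MAX; i++, scale += scale) {
		...
		old_load = decay_load_missed(old_load, pending_updates - 1, i);
		if (tickless_load) {
			/*
			 * Add back what the idle-oriented decay removed, as if
			 * tickless_load had been seen on every missed tick:
			 * decayed_old - decayed_tickless + tickless.
			 */
			old_load -= decay_load_missed(tickless_load,
						      pending_updates - 1, i);
			old_load += tickless_load;
		}
		...
	}
	...
}

update_cpu_load_active() could then pass active = 1 along with the real
pending_updates, while the idle/nohz paths keep passing 0.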