[PATCH] sched: Count loadavg under rq::lock in calc_load_nohz_start()

From: Kirill Tkhai
Date: Fri Jul 07 2017 - 13:07:24 EST


Since calc_load_fold_active() reads two variables (nr_running and
nr_uninterruptible), it may race with a parallel try_to_wake_up()
updating them. Thus, it must be called under rq::lock to prevent that.
Also put calc_load_migrate() under the lock for uniformity.
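
For reference, a rough sketch of the unlocked read (not the exact
kernel source; the field names follow the scheduler code, but details
differ between versions):

	/*
	 * Sketch only: calc_load_fold_active() samples two per-rq
	 * counters and folds the difference against the previously
	 * sampled value into a delta for the global calc_load_tasks.
	 */
	long calc_load_fold_active(struct rq *this_rq, long adjust)
	{
		long nr_active, delta = 0;

		nr_active = this_rq->nr_running - adjust;
		nr_active += (long)this_rq->nr_uninterruptible;

		if (nr_active != this_rq->calc_load_active) {
			delta = nr_active - this_rq->calc_load_active;
			this_rq->calc_load_active = nr_active;
		}

		return delta;
	}

try_to_wake_up() adjusts nr_uninterruptible and nr_running as two
separate updates under rq::lock, so a reader that does not hold the
lock can hit the window in between, compute nr_active from an
inconsistent pair and fold a bogus delta into calc_load_tasks.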

I observed a machine with a negative calc_load_tasks on kernel 3.10,
and this race seems to be the reason.

Signed-off-by: Kirill Tkhai <ktkhai@xxxxxxxxxxxxx>
---
 kernel/sched/core.c    | 2 +-
 kernel/sched/loadavg.c | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d3d39a283beb..92b3512bdcfc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5657,9 +5657,9 @@ int sched_cpu_dying(unsigned int cpu)
 	}
 	migrate_tasks(rq, &rf);
 	BUG_ON(rq->nr_running != 1);
+	calc_load_migrate(rq);
 	rq_unlock_irqrestore(rq, &rf);
 
-	calc_load_migrate(rq);
 	update_max_interval();
 	nohz_balance_exit_idle(cpu);
 	hrtick_clear(rq);
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index f14716a3522f..723349844990 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -183,13 +183,16 @@ static inline int calc_load_read_idx(void)
 void calc_load_nohz_start(void)
 {
 	struct rq *this_rq = this_rq();
+	struct rq_flags rf;
 	long delta;
 
 	/*
 	 * We're going into NO_HZ mode, if there's any pending delta, fold it
 	 * into the pending NO_HZ delta.
 	 */
+	rq_lock(this_rq, &rf);
 	delta = calc_load_fold_active(this_rq, 0);
+	rq_unlock(this_rq, &rf);
 	if (delta) {
 		int idx = calc_load_write_idx();