[PATCH] sched: Fix nr_uninterruptible race causing increasing load average
From: Phil Auld
Date: Wed Jul 07 2021 - 15:05:13 EST
On systems with weaker memory ordering (e.g. Power) commit dbfb089d360b
("sched: Fix loadavg accounting race") causes increasing values of load
average (via rq->calc_load_active and calc_load_tasks) because the
wakeup CPU does not always see the write to
task->sched_contributes_to_load made in __schedule(). Missing that
write, we fail to decrement nr_uninterruptible when waking up a task
that incremented nr_uninterruptible when it slept.
The rq->lock serialization is insufficient here because the sleep and
the wakeup can happen under two different rq->locks. Add an smp_wmb()
to __schedule() and a pairing smp_rmb() before the read in
ttwu_do_activate().
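
Roughly, the window looks like this (a simplified sketch; call paths
and locking context abbreviated):

  CPU0: __schedule() (rq0->lock)      CPU1: try_to_wake_up() (rq1->lock)
  ------------------------------      ----------------------------------
  prev->sched_contributes_to_load = 1
  rq0->nr_uninterruptible++
  prev->on_rq = 0   /* dequeue */
                                      sees p->on_rq == 0, continues wakeup
                                      ttwu_do_activate():
                                        if (p->sched_contributes_to_load)
                                                rq1->nr_uninterruptible--
                                      /* the flag read may still return 0
                                         here, so the decrement is skipped */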
Fixes: dbfb089d360b ("sched: Fix loadavg accounting race")
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Juri Lelli <juri.lelli@xxxxxxxxxx>
Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Cc: Waiman Long <longman@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Phil Auld <pauld@xxxxxxxxxx>
---
kernel/sched/core.c | 7 +++++++
1 file changed, 7 insertions(+)
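
For anyone who wants to poke at the ordering outside the kernel, below
is a minimal userspace sketch of the same pattern using C11 atomics
(illustrative only, not kernel code: the variable names mirror the
scheduler fields, and atomic_thread_fence(memory_order_release/acquire)
stands in for smp_wmb()/smp_rmb()). With the fences removed, the
decrement can be lost on weakly ordered hardware; a single run will
usually still print 0, so treat it as a sketch of the barrier pairing,
not a reliable reproducer.

  /* build: gcc -O2 -pthread sketch.c (file name is arbitrary) */
  #include <pthread.h>
  #include <stdatomic.h>
  #include <stdio.h>

  static atomic_int contributes_to_load;    /* p->sched_contributes_to_load */
  static atomic_int on_rq = 1;              /* p->on_rq */
  static atomic_int nr_uninterruptible = 1; /* the sleep side already counted */

  static void *sleeper(void *unused)
  {
          /* __schedule(): record that the sleeper contributes to load */
          atomic_store_explicit(&contributes_to_load, 1, memory_order_relaxed);
          /* smp_wmb(): order the flag write before the on_rq write */
          atomic_thread_fence(memory_order_release);
          /* deactivate: this is the write the waker keys off */
          atomic_store_explicit(&on_rq, 0, memory_order_relaxed);
          return NULL;
  }

  static void *waker(void *unused)
  {
          /* ttwu(): wait until the sleeper is off its runqueue */
          while (atomic_load_explicit(&on_rq, memory_order_relaxed))
                  ;
          /* smp_rmb(): order the on_rq read before the flag read */
          atomic_thread_fence(memory_order_acquire);
          /* without both fences this load may still return 0 */
          if (atomic_load_explicit(&contributes_to_load, memory_order_relaxed))
                  atomic_fetch_sub(&nr_uninterruptible, 1);
          return NULL;
  }

  int main(void)
  {
          pthread_t s, w;

          pthread_create(&s, NULL, sleeper, NULL);
          pthread_create(&w, NULL, waker, NULL);
          pthread_join(s, NULL);
          pthread_join(w, NULL);
          /* 0: the decrement happened; 1: the decrement was missed */
          printf("nr_uninterruptible = %d\n", atomic_load(&nr_uninterruptible));
          return 0;
  }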
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4ca80df205ce..ced7074716eb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2992,6 +2992,8 @@ ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags,
lockdep_assert_held(&rq->lock);
+ /* Pairs with the smp_wmb() in __schedule() */
+ smp_rmb();
if (p->sched_contributes_to_load)
rq->nr_uninterruptible--;
@@ -5084,6 +5086,11 @@ static void __sched notrace __schedule(bool preempt)
!(prev_state & TASK_NOLOAD) &&
!(prev->flags & PF_FROZEN);
+ /*
+ * Order the above write against the clearing of p->on_rq so that it is
+ * visible to the waking CPU in ttwu_do_activate(); pairs with smp_rmb().
+ */
+ smp_wmb();
if (prev->sched_contributes_to_load)
rq->nr_uninterruptible++;
--
2.18.0