[PATCH 2/7] sched/fair: calculate runnable_weight slightly differently
From: Josef Bacik
Date: Fri Jul 14 2017 - 09:22:01 EST
From: Josef Bacik <jbacik@xxxxxx>
Our runnable_weight currently looks like this
  runnable_weight = shares * runnable_load_avg / load_avg
The goal is to scale the group's runnable weight by its runnable-to-load_avg
ratio. The problem is that this biases us towards tasks that never go to
sleep: tasks that do sleep have their runnable_load_avg decayed pretty hard,
which drastically reduces the runnable weight of groups with interactive
tasks. To fix this imbalance we tweak the calculation slightly, so in the
ideal case it is still the above, but in the interactive case it becomes
  runnable_weight = shares * runnable_weight / load_weight
which will make the weight distribution fairer between interactive and
non-interactive groups.
Signed-off-by: Josef Bacik <jbacik@xxxxxx>
---
kernel/sched/fair.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 326bc55..5d4489e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2880,9 +2880,15 @@ static void update_cfs_group(struct sched_entity *se)
* Note: we need to deal with very sporadic 'runnable > load' cases
* due to numerical instability.
*/
- runnable = shares * gcfs_rq->avg.runnable_load_avg;
- if (runnable)
- runnable /= max(gcfs_rq->avg.load_avg, gcfs_rq->avg.runnable_load_avg);
+ runnable = shares * max(scale_load_down(gcfs_rq->runnable_weight),
+ gcfs_rq->avg.runnable_load_avg);
+ if (runnable) {
+ long divider = max(gcfs_rq->avg.load_avg,
+ scale_load_down(gcfs_rq->load.weight));
+ divider = max_t(long, 1, divider);
+ runnable /= divider;
+ }
+ runnable = clamp_t(long, runnable, MIN_SHARES, shares);
reweight_entity(cfs_rq_of(se), se, shares, runnable);
}
--
2.9.3