[PATCH] sched/fair: Fix fixed point arithmetic width for shares and effective load

From: Dietmar Eggemann
Date: Mon Aug 22 2016 - 10:00:55 EST


Since commit 2159197d6677 ("sched/core: Enable increased load resolution
on 64-bit kernels") we have two different fixed point units for load:

shares in calc_cfs_shares() uses a 20-bit fixed point unit on 64-bit
kernels. Therefore use scale_load() on MIN_SHARES.

wl in effective_load() uses a 10-bit fixed point unit. Therefore use
scale_load_down() on tg->shares, which uses the 20-bit fixed point unit
on 64-bit kernels.
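
For example, on a 64-bit kernel a task group left at the default 1024
shares has tg->shares == 1024 << 10 == 1048576, and shares in
calc_cfs_shares() is computed in that same 20-bit unit, so the lower
clamp needs scale_load(MIN_SHARES) == 2048 rather than the raw
MIN_SHARES == 2. Conversely, wl and wg in effective_load() are in the
10-bit unit, so tg->shares has to be scaled back down before being
used there.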

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
---

Besides the load_above_capacity issue with the different fixed point
units for load, addressed in "[PATCH] sched/fair: Fix
load_above_capacity fixed point arithmetic width", there are similar
issues for shares in calc_cfs_shares() as well as for wl in
effective_load().
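
For reference, the two load units come from the scale_load()/
scale_load_down() pair. As a rough sketch (the tree this patch is based
on is authoritative), the relevant definitions look like this:

  /* include/linux/sched.h */
  #define SCHED_FIXEDPOINT_SHIFT	10

  /* kernel/sched/sched.h */
  #ifdef CONFIG_64BIT
  # define scale_load(w)	((w) << SCHED_FIXEDPOINT_SHIFT)
  # define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
  #else
  # define scale_load(w)	(w)
  # define scale_load_down(w)	(w)
  #endif

  /* MIN_SHARES is defined in the 10-bit unit */
  #define MIN_SHARES	(1UL << 1)

So on 64-bit kernels tg->shares carries a 20-bit fraction, whereas
MIN_SHARES and the wl/wg values in effective_load() stay in the 10-bit
unit, hence the scale_load()/scale_load_down() conversions in the hunks
below.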

kernel/sched/fair.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 61d485421bed..18f80c4c7737 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2530,8 +2530,8 @@ static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
 	if (tg_weight)
 		shares /= tg_weight;
 
-	if (shares < MIN_SHARES)
-		shares = MIN_SHARES;
+	if (shares < scale_load(MIN_SHARES))
+		shares = scale_load(MIN_SHARES);
 	if (shares > tg->shares)
 		shares = tg->shares;
 
@@ -5023,9 +5023,9 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
 		 * wl = S * s'_i; see (2)
 		 */
 		if (W > 0 && w < W)
-			wl = (w * (long)tg->shares) / W;
+			wl = (w * (long)scale_load_down(tg->shares)) / W;
 		else
-			wl = tg->shares;
+			wl = scale_load_down(tg->shares);
 
 		/*
 		 * Per the above, wl is the new se->load.weight value; since
--
1.9.1