[PATCH 27/30] sched: fix mult overflow

From: Peter Zijlstra
Date: Fri Jun 27 2008 - 08:08:16 EST


From: Srivatsa Vaddagiri <vatsa@xxxxxxxxxxxxxxxxxx>

It was observed that these multiplications can overflow: on 32-bit
platforms, the products rem_load_move * busiest_weight and
moved_load * busiest_h_load can exceed the range of long. Do the
arithmetic in u64 and divide with div_u64() instead.

Signed-off-by: Srivatsa Vaddagiri <vatsa@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
---
kernel/sched_fair.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -1518,7 +1518,7 @@ load_balance_fair(struct rq *this_rq, in
 		struct cfs_rq *busiest_cfs_rq = tg->cfs_rq[busiest_cpu];
 		unsigned long busiest_h_load = busiest_cfs_rq->h_load;
 		unsigned long busiest_weight = busiest_cfs_rq->load.weight;
-		long rem_load, moved_load;
+		u64 rem_load, moved_load;
 
 		/*
 		 * empty group
@@ -1526,8 +1526,8 @@ load_balance_fair(struct rq *this_rq, in
 		if (!busiest_cfs_rq->task_weight)
 			continue;
 
-		rem_load = rem_load_move * busiest_weight;
-		rem_load /= busiest_h_load + 1;
+		rem_load = (u64)rem_load_move * busiest_weight;
+		rem_load = div_u64(rem_load, busiest_h_load + 1);
 
 		moved_load = __load_balance_fair(this_rq, this_cpu, busiest,
 				rem_load, sd, idle, all_pinned, this_best_prio,
@@ -1537,7 +1537,7 @@ load_balance_fair(struct rq *this_rq, in
 			continue;
 
 		moved_load *= busiest_h_load;
-		moved_load /= busiest_weight + 1;
+		moved_load = div_u64(moved_load, busiest_weight + 1);
 
 		rem_load_move -= moved_load;
 		if (rem_load_move < 0)

--
