Hi Pierre
On Fri, 13 Sept 2024 at 10:58, Pierre Gondois <pierre.gondois@xxxxxxx> wrote:
(struct sg_lb_stats).idle_cpus is of type 'unsigned int'.
(local->idle_cpus - busiest->idle_cpus) can underflow (to UINT_MAX,
for instance), and max_t(long, 0, UINT_MAX) will then return UINT_MAX
instead of clamping to 0.
Use lsub_positive() instead of max_t().
Have you faced the problem, or is this patch based on code review?
We have the below check in sched_balance_find_src_group(), which should
ensure that local->idle_cpus > busiest->idle_cpus:
	if (busiest->group_weight > 1 &&
	    local->idle_cpus <= (busiest->idle_cpus + 1)) {
		/*
		 * If the busiest group is not overloaded
		 * and there is no imbalance between this and busiest
		 * group wrt idle CPUs, it is balanced. The imbalance
		 * becomes significant if the diff is greater than 1
		 * otherwise we might end up to just move the imbalance
		 * on another group. Of course this applies only if
		 * there is more than 1 CPU per group.
		 */
		goto out_balanced;
	}
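
For reference, a minimal userspace sketch of the wrap-around described
in the commit message (not kernel code; max_t() below is a simplified
stand-in for the kernel macro). On a 64-bit long it prints a huge
positive value rather than 0:

#include <stdio.h>

/* Simplified stand-in for the kernel's max_t(type, a, b). */
#define max_t(type, a, b) ((type)(a) > (type)(b) ? (type)(a) : (type)(b))

int main(void)
{
	unsigned int local_idle = 2, busiest_idle = 4;

	/*
	 * The subtraction happens in 'unsigned int' and wraps to
	 * UINT_MAX - 1 before max_t() ever sees it; on 64-bit, the cast
	 * to 'long' keeps the wrapped value positive, so the clamp
	 * against 0 never triggers.
	 */
	long imbalance = max_t(long, 0, local_idle - busiest_idle);

	printf("imbalance = %ld\n", imbalance);	/* 4294967294, not 0 */
	return 0;
}

So the clamp in the patch would only matter if the check above can ever
be bypassed.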
Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Pierre Gondois <pierre.gondois@xxxxxxx>
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9057584ec06d..6d9124499f52 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10775,8 +10775,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 			 * idle CPUs.
 			 */
 			env->migration_type = migrate_task;
-			env->imbalance = max_t(long, 0,
-					       (local->idle_cpus - busiest->idle_cpus));
+			env->imbalance = local->idle_cpus;
+			lsub_positive(&env->imbalance, busiest->idle_cpus);
 		}
 
 #ifdef CONFIG_NUMA
--
2.25.1
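
For readers not familiar with it: lsub_positive() lives in
kernel/sched/fair.c and subtracts while clamping the result at 0. A
standalone sketch of its semantics (simplified; the real macro is
written in the same spirit with min_t() and typeof):

#include <stdio.h>

/*
 * Simplified model of lsub_positive(): subtract _val from *_ptr, but
 * never let the result drop below zero. Relies on the GCC/clang
 * typeof extension, as the kernel macro does.
 */
#define lsub_positive(_ptr, _val) do {				\
	typeof(_ptr) ptr = (_ptr);				\
	typeof(*ptr) val = (_val);				\
	*ptr -= (*ptr < val) ? *ptr : val;			\
} while (0)

int main(void)
{
	long imbalance = 2;	/* env->imbalance is a long */

	/* 2 - 4 clamps to 0 instead of wrapping or going negative. */
	lsub_positive(&imbalance, 4L);
	printf("imbalance = %ld\n", imbalance);	/* prints 0 */
	return 0;
}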