[PATCH] sched/fair: Don't pull task if local group is more loaded than busiest group
From: Aubrey Li
Date: Wed Mar 25 2020 - 08:57:48 EST
A huge load imbalance was observed when the local group type is
group_fully_busy and the average load of the local group is greater
than that of the selected busiest group, so the imbalance calculation
actually returns a negative value. Fix this by comparing the average
loads before the local group type check.
Signed-off-by: Aubrey Li <aubrey.li@xxxxxxxxxxxxxxx>
---
kernel/sched/fair.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c1217bf..c524369 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8862,17 +8862,17 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
goto out_balanced;
/*
+ * If the local group is more loaded than the selected
+ * busiest group don't try to pull any tasks.
+ */
+ if (local->avg_load >= busiest->avg_load)
+ goto out_balanced;
+
+ /*
* When groups are overloaded, use the avg_load to ensure fairness
* between tasks.
*/
if (local->group_type == group_overloaded) {
- /*
- * If the local group is more loaded than the selected
- * busiest group don't try to pull any tasks.
- */
- if (local->avg_load >= busiest->avg_load)
- goto out_balanced;
-
/* XXX broken for overlapping NUMA groups */
sds.avg_load = (sds.total_load * SCHED_CAPACITY_SCALE) /
sds.total_capacity;
--
2.7.4