[PATCH 3.16 182/204] sched/topology: Simplify build_overlap_sched_groups()
From: Ben Hutchings
Date: Thu Dec 28 2017 - 12:22:11 EST
3.16.52-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
commit 91eaed0d61319f58a9f8e43d41a8cbb069b4f73d upstream.
Now that the first group will always be the previous domain of this
@cpu, this can be simplified.

In fact, having to write the code that is now removed should've been a
big clue that I was doing it wrong :/
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
[bwh: Backported to 3.16: adjust filename, context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
kernel/sched/core.c | 13 ++-----------
1 file changed, 2 insertions(+), 11 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5869,7 +5869,7 @@ static void init_overlap_sched_group(str
 static int
 build_overlap_sched_groups(struct sched_domain *sd, int cpu)
 {
-	struct sched_group *first = NULL, *last = NULL, *groups = NULL, *sg;
+	struct sched_group *first = NULL, *last = NULL, *sg;
 	const struct cpumask *span = sched_domain_span(sd);
 	struct cpumask *covered = sched_domains_tmpmask;
 	struct sd_data *sdd = sd->private;
@@ -5899,15 +5899,6 @@ build_overlap_sched_
 
 		init_overlap_sched_group(sd, sg);
 
-		/*
-		 * Make sure the first group of this domain contains the
-		 * canonical balance cpu. Otherwise the sched_domain iteration
-		 * breaks. See update_sg_lb_stats().
-		 */
-		if ((!groups && cpumask_test_cpu(cpu, sg_span)) ||
-		    group_balance_cpu(sg) == cpu)
-			groups = sg;
-
 		if (!first)
 			first = sg;
 		if (last)
@@ -5915,7 +5906,7 @@ build_overlap_sched_
 		last = sg;
 		last->next = first;
 	}
-	sd->groups = groups;
+	sd->groups = first;
 
 	return 0;
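
For reviewers who would rather not page through kernel/sched/core.c: the
removed "canonical balance cpu" hunt is no longer needed because, after the
earlier patch in this series, the group-building loop walks the domain span
starting at @cpu (for_each_cpu_wrap()), so the first group linked into the
circular list is always the one covering @cpu, and sd->groups can simply
point at the head of the ring. Below is a minimal userspace sketch of that
list-building pattern; struct group, build_ring() and the hard-coded visit
order are made up for illustration and are not kernel API.

	#include <stdio.h>
	#include <stdlib.h>

	/* Illustrative stand-in for struct sched_group: one node of a
	 * circular, singly linked list built in visit order. */
	struct group {
		int start_cpu;		/* first cpu covered by this group */
		struct group *next;
	};

	/*
	 * Build the ring the way build_overlap_sched_groups() does after
	 * this patch: append nodes in visit order and remember the head in
	 * "first".  Because the visit order starts at @cpu (as
	 * for_each_cpu_wrap() does), the head is guaranteed to cover @cpu,
	 * so no extra search for a "canonical" group is needed.
	 */
	static struct group *build_ring(const int *visit_order, int n)
	{
		struct group *first = NULL, *last = NULL;

		for (int i = 0; i < n; i++) {
			struct group *sg = calloc(1, sizeof(*sg));

			if (!sg)
				return NULL;	/* demo only: earlier nodes leak */
			sg->start_cpu = visit_order[i];

			if (!first)
				first = sg;
			if (last)
				last->next = sg;
			last = sg;
			last->next = first;	/* keep the list circular at every step */
		}
		return first;			/* plays the role of "sd->groups = first" */
	}

	int main(void)
	{
		/* for_each_cpu_wrap(i, span, 2) over cpus 0-3 would visit 2, 3, 0, 1 */
		const int visit_order[] = { 2, 3, 0, 1 };
		struct group *ring = build_ring(visit_order, 4);
		struct group *sg = ring;

		if (!ring)
			return 1;

		do {
			printf("group starting at cpu %d\n", sg->start_cpu);
			sg = sg->next;
		} while (sg != ring);

		return 0;
	}

The do/while walk at the end mirrors how users of sd->groups iterate the
ring: start at the head, follow ->next, and stop once back at the head.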