[PATCH 07/14] sched/topology: Optimize build_group_mask()

From: Peter Zijlstra
Date: Fri Apr 28 2017 - 09:34:31 EST


The group mask is always used in intersection with the group CPUs, so when
building the group mask we do not need to care about CPUs that are not part
of the group: any bit set for such a CPU would be masked out again at the
point of use. Iterating over the group span instead of the whole domain
span is therefore sufficient, and cheaper.
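
For reference, this is roughly how build_group_mask() reads with the change
applied. The hunk quoted below is truncated before the end of the loop, so
the final cpumask_set_cpu() into the group mask is reconstructed from the
surrounding code and should be taken as an illustrative sketch rather than
the exact patched source:

static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
{
	const struct cpumask *sg_span = sched_group_cpus(sg);
	struct sd_data *sdd = sd->private;
	struct sched_domain *sibling;
	int i;

	/* Walk only the group's own CPUs, not the whole domain span. */
	for_each_cpu(i, sg_span) {
		sibling = *per_cpu_ptr(sdd->sd, i);
		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
			continue;

		/* CPU i is covered by its sibling domain; include it. */
		cpumask_set_cpu(i, sched_group_mask(sg));
	}
}

The loop body is unchanged; only the set of CPUs being walked shrinks.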

Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Mike Galbraith <efault@xxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: riel@xxxxxxxxxx
Signed-off-by: Lauro Ramos Venancio <lvenanci@xxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/1492717903-5195-2-git-send-email-lvenanci@xxxxxxxxxx
---
kernel/sched/topology.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -506,12 +506,12 @@ enum s_alloc {
  */
 static void build_group_mask(struct sched_domain *sd, struct sched_group *sg)
 {
-	const struct cpumask *span = sched_domain_span(sd);
+	const struct cpumask *sg_span = sched_group_cpus(sg);
 	struct sd_data *sdd = sd->private;
 	struct sched_domain *sibling;
 	int i;
 
-	for_each_cpu(i, span) {
+	for_each_cpu(i, sg_span) {
 		sibling = *per_cpu_ptr(sdd->sd, i);
 		if (!cpumask_test_cpu(i, sched_domain_span(sibling)))
 			continue;