Re: [RFC][PATCH 02/14] sched: Simplify cpu_power initialization

From: Steven Rostedt
Date: Mon Mar 14 2011 - 17:36:11 EST


On Mon, Mar 14, 2011 at 04:06:15PM +0100, Peter Zijlstra wrote:
> The code in update_group_power() does what init_sched_groups_power()
> does and more, so remove the special init_ code and call the generic
> code instead.
>
> Signed-off-by: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
> LKML-Reference: <new-submission>
> ---
> kernel/sched.c | 44 +++++---------------------------------------
> 1 file changed, 5 insertions(+), 39 deletions(-)
>
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c
> +++ linux-2.6/kernel/sched.c
> @@ -6655,9 +6655,6 @@ cpu_attach_domain(struct sched_domain *s
> struct rq *rq = cpu_rq(cpu);
> struct sched_domain *tmp;
>
> - for (tmp = sd; tmp; tmp = tmp->parent)
> - tmp->span_weight = cpumask_weight(sched_domain_span(tmp));
> -
> /* Remove the sched domains which do not contribute to scheduling. */
> for (tmp = sd; tmp; ) {
> struct sched_domain *parent = tmp->parent;

This and ...

[ snip the hunk that was explained in the changelog ]

>
> /*
> @@ -7483,7 +7446,7 @@ static int __build_sched_domains(const s
> {
> enum s_alloc alloc_state = sa_none;
> struct s_data d;
> - struct sched_domain *sd;
> + struct sched_domain *sd, *tmp;
> int i;
> #ifdef CONFIG_NUMA
> d.sd_allnodes = 0;
> @@ -7506,6 +7469,9 @@ static int __build_sched_domains(const s
> sd = __build_book_sched_domain(&d, cpu_map, attr, sd, i);
> sd = __build_mc_sched_domain(&d, cpu_map, attr, sd, i);
> sd = __build_smt_sched_domain(&d, cpu_map, attr, sd, i);
> +
> + for (tmp = sd; tmp; tmp = tmp->parent)
> + tmp->span_weight = cpumask_weight(sched_domain_span(tmp));
> }
>
> for_each_cpu(i, cpu_map) {
>

this look like a separate change from what was explained in the
changelog. Did you forget a "quilt new" between these two changes?
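For reference, the hunk being moved just walks up the sched_domain
hierarchy and caches the weight of each domain's CPU span. A toy
userspace sketch of that traversal follows; the struct layout and the
popcount-based cpumask_weight() stand-in are simplified illustrations,
not the kernel's actual definitions:

#include <stdio.h>
#include <stdint.h>

/* Toy stand-ins for the kernel's sched_domain and cpumask types. */
struct sched_domain {
	struct sched_domain *parent; /* next-wider domain, NULL at the top */
	uint64_t span;               /* bitmask of CPUs covered (toy cpumask) */
	unsigned int span_weight;    /* cached number of CPUs in the span */
};

/* Toy cpumask_weight(): count the set bits in the span. */
static unsigned int cpumask_weight(uint64_t mask)
{
	unsigned int w = 0;

	for (; mask; mask &= mask - 1) /* clear lowest set bit each pass */
		w++;
	return w;
}

/* The moved loop: cache span weights all the way up the hierarchy. */
static void init_span_weights(struct sched_domain *sd)
{
	struct sched_domain *tmp;

	for (tmp = sd; tmp; tmp = tmp->parent)
		tmp->span_weight = cpumask_weight(tmp->span);
}

int main(void)
{
	struct sched_domain node = { .parent = NULL,  .span = 0xff }; /* 8 CPUs */
	struct sched_domain mc   = { .parent = &node, .span = 0x0f }; /* 4 CPUs */
	struct sched_domain smt  = { .parent = &mc,   .span = 0x03 }; /* 2 CPUs */

	init_span_weights(&smt);
	printf("smt=%u mc=%u node=%u\n",
	       smt.span_weight, mc.span_weight, node.span_weight); /* 2 4 8 */
	return 0;
}

Whether it runs in cpu_attach_domain() or at the end of
__build_sched_domains(), the traversal itself is identical; the
question is just which patch the move belongs in.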

-- Steve
