Re: [PATCH 11/15] sched: Pass unlimited __cpu_power information to upper domain level groups

From: Peter Zijlstra
Date: Tue Aug 25 2009 - 04:31:19 EST


On Tue, 2009-08-25 at 13:34 +0530, Balbir Singh wrote:
> * Peter Zijlstra <peterz@xxxxxxxxxxxxx> [2009-08-25 09:11:14]:
>
> > On Mon, 2009-08-24 at 23:49 +0530, Balbir Singh wrote:
> >
> > > That reminds me, accounting is currently broken and should be based on
> > > APERF/MPERF (Power gets it right - based on SPURR).
> >
> > What accounting?
> >
>
>
> We need scaled time accounting for x86 (see *timescaled). By scaled
> accounting I mean time scaled by the APERF/MPERF ratio.

Runtime accounting? I don't see why that would need to be scaled by a/m;
we're accounting wall-time, not a virtual time quantity that represents
work.
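For reference, the APERF/MPERF scaling being discussed amounts to the
following (a minimal user-space sketch; `scale_delta()` is an illustrative
name, not a kernel function, and the MSR reads are omitted):

```c
#include <stdint.h>

/*
 * Scale a wall-clock delta by the ratio of the APERF and MPERF counter
 * deltas over the same interval.  APERF advances at the actual
 * (frequency-scaled) rate, MPERF at the maximum rate, so an
 * aperf/mperf ratio below 1 means the CPU ran slower than full speed.
 */
static uint64_t scale_delta(uint64_t wall_delta,
			    uint64_t aperf_delta, uint64_t mperf_delta)
{
	if (!mperf_delta)	/* avoid division by zero; fall back to wall time */
		return wall_delta;
	return wall_delta * aperf_delta / mperf_delta;
}
```

E.g. a task that ran 1000 wall-clock units at half frequency (aperf
delta half the mperf delta) would be charged 500 scaled units.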

> > > > The trouble is that cpu_power is now abused for placement decisions too,
> > > > and that needs to be taken out.
> > >
> > > OK.. so you propose extending the static cpu_power to dynamic
> > > cpu_power but based on current topology?
> >
> > Right, so cpu_power is primarily used to normalize domain weight in the
> > load-balancer.
> >
> > Suppose a 4 core machine with 1 unplugged core:
> >
> > 0,1,3
> >
> > 0,1 3
> >
> > The sd-0,1 will have cpu_power 2048, while the sd-3 will have 1024; this
> > allows find_busiest_group() for sd-0,1,3 to pick the one that is
> > relatively most overloaded.
> >
> > Supposing 3, 2, 2 (nice-0) tasks on these cores, the domain weight of
> > sd-0,1 is 5*1024 and sd-3 is 2*1024; normalized that becomes 5/2 (2.5)
> > and 2 respectively, which clearly shows sd-0,1 to be the busier of the pair.
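The normalization in the quoted example can be sketched like this
(`group_avg_load()` is a hypothetical helper; the real logic lives in
find_busiest_group()):

```c
#include <assert.h>

#define SCHED_LOAD_SCALE 1024UL

/*
 * Normalize a group's accumulated load by its cpu_power, both expressed
 * in SCHED_LOAD_SCALE units: the group with the higher result is the
 * relatively more overloaded one.
 */
static unsigned long group_avg_load(unsigned long load, unsigned long cpu_power)
{
	return load * SCHED_LOAD_SCALE / cpu_power;
}
```

With the numbers above, sd-0,1 normalizes to 5*1024*1024/2048 = 2560
(2.5 in load units) and sd-3 to 2*1024*1024/1024 = 2048 (2.0), so sd-0,1
is picked as busiest.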
> >
> > Now back in the days Nick wrote all this, he did the cpu_power hack for
> > SMT which sets the combined cpu_power of 2 threads (that's all we had
> > back then) to 1024, because two threads share 1 core, and are roughly as
> > fast.
> >
> > He then also used this to influence task placement, preferring to move
> > tasks to another sibling domain before activating the second thread;
> > this worked.
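In its simplest form, the SMT hack described above amounts to splitting
one core's worth of cpu_power across the siblings (a simplification for
illustration; the historical kernel code applied a small SMT gain rather
than an exact even split):

```c
#include <assert.h>

#define SCHED_LOAD_SCALE 1024UL

/*
 * Hypothetical helper: give each SMT sibling an equal share of one
 * core's cpu_power, so the sibling domain sums to roughly
 * SCHED_LOAD_SCALE -- two threads sharing one core are together about
 * as fast as that one core.
 */
static unsigned long smt_sibling_power(unsigned int nr_siblings)
{
	return SCHED_LOAD_SCALE / nr_siblings;
}
```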
> >
> > Then multi-core with shared caches came along and people did the same
> > trick for mc power save in order to get that placement stuff, but that
> > horribly broke the load-balancer normalization.
> >
> > Now comes multi-node, and people asking for more elaborate placement
> > strategies and all this starts creaking like a ghost house about to
> > collapse.
> >
> > Therefore I want cpu_power back to load normalization only, and do the
> > placement stuff with something else.
> >

> What do you have in mind for the something else? Aren't normalization
> and placement two sides of the same coin? My concern is that load
> normalization might give different recommendations from the placement
> stuff, then what do we do?

They are related but not the same. People have been asking for placement
policies that go beyond what the load relation alone can express.

Also, the current ties between them are already strained by multi-level
placement policies.

So what I'd like to see is moving all placement decisions to SD flags and
restoring cpu_power to a straight sum of work capacity.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/