Re: [discussion]sched: a rough proposal to enable power saving in scheduler

From: Paul Turner
Date: Fri Aug 17 2012 - 05:00:10 EST


On Wed, Aug 15, 2012 at 11:02 AM, Arjan van de Ven
<arjan@xxxxxxxxxxxxxxx> wrote:
> On 8/15/2012 9:34 AM, Matthew Garrett wrote:
>> On Wed, Aug 15, 2012 at 01:05:38PM +0200, Peter Zijlstra wrote:
>>> On Mon, 2012-08-13 at 20:21 +0800, Alex Shi wrote:
>>>> It is based on the following assumption:
>>>> 1. If the system is crowded with tasks, letting only a few domain CPUs
>>>> run while the others idle does not save power. Having all CPUs take the
>>>> load, finish the tasks early, and then go idle saves more power and
>>>> gives a better user experience.
>>>
>>> I'm not sure this is a valid assumption. I've had it explained to me by
>>> various people that race-to-idle isn't always the best thing. It has to
>>> do with the cost of switching power states and the duration of execution
>>> and other such things.
>>
>> This is affected by Intel's implementation - if there's a single active
>
> not just Intel.. also AMD
> basically everyone who has the memory controller in the CPU package will end up with
> a restriction very similar to this.
>

I think this circles back to discussion previously held on this topic.
This preference is arch-specific; we need to reduce the set of inputs
to a sensible, actionable set, and plumb that through so that the
architecture, not the scheduler, supplies the preference.
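
To make that concrete, a rough sketch of what such plumbing might look
like (illustrative only; none of these names exist today):

/*
 * Illustrative only -- not an existing interface.  The architecture,
 * which knows about shared memory controllers, package C-states and
 * the like, supplies the pack-vs-spread preference; the scheduler
 * consumes it instead of hard-coding a policy.
 */
enum sched_power_pref {
	SCHED_POWER_PACK,	/* consolidate load on few CPUs, idle the rest */
	SCHED_POWER_SPREAD,	/* use all CPUs, finish early, race to idle */
};

/* Supplied per-arch; x86 might return SPREAD when the package shares a
 * memory controller, per Arjan's point above. */
enum sched_power_pref arch_sched_power_pref(int cpu);

/* The generic scheduler side then only asks: */
static inline bool sched_prefer_spread(int cpu)
{
	return arch_sched_power_pref(cpu) == SCHED_POWER_SPREAD;
}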

That you believe 100-300us is actually the tipping point versus the
power-state migration cost is probably in itself one of the most useful
replies I've seen on this topic in all of the last few rounds of
discussion it's been through. It suggests we could actually parameterize
this in a manner similar to the wake-up migration cost: a minimum usage
average above which it's worth spilling to an idle sibling.
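
As a strawman (the knob and helpers below are hypothetical, named by
analogy with sysctl_sched_migration_cost), the wake-up check could look
something like:

/*
 * Hypothetical sketch: spill to an idle sibling only when the task's
 * recent usage average exceeds an arch-tunable cost, analogous to how
 * sysctl_sched_migration_cost gates hot-task migration today.
 */
static unsigned int sysctl_sched_spill_cost = 200000;	/* ns, ~200us */

static int select_idle_sibling_or_stay(struct task_struct *p, int prev_cpu)
{
	/*
	 * If the task's average runtime is below the cost of waking
	 * another core (power-state exit, exit from self-refresh, ...),
	 * keep it on prev_cpu; spilling costs more than it saves.
	 */
	if (task_avg_runtime_ns(p) < sysctl_sched_spill_cost)
		return prev_cpu;

	return find_idle_sibling(p, prev_cpu);	/* hypothetical helper */
}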

- Paul

> (this is because the exit-from-self-refresh latency is pretty high.. at least in DDR2/3)
>
>