Re: power increase issue on light load

From: Nikhil Rao
Date: Tue Jun 28 2011 - 22:32:22 EST


On Tue, Jun 28, 2011 at 10:13 AM, Nikhil Rao <ncrao@xxxxxxxxxx> wrote:
> On Tue, Jun 28, 2011 at 7:59 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> On Tue, 2011-06-28 at 08:02 +0800, Alex,Shi wrote:
>>> > >
>>> > > What happens if you try something like the below? Increased imbalance
>>> > > might lead to more load-balance action, which might lead to more task
>>> > > migration/waking up of cpus etc.
>>> > >
>>> > > If the below makes any difference, Nikhil's changes have a funny that
>>> > > needs to be caught.
>>> >
>>> > Yes, it mostly removes the commit's effect, so the power usage recovered.
>>> >
>>> > In fact, the only suspicious thing I found is the large imbalance, but
>>> > that is what the commit wants ...
>>>
>>> Any further comments on this?
>>
>> I had a look over all that stuff, but I couldn't find an obvious unit
>> mismatch in any of the imbalance code. Nikhil, any clue?
>>
>
> Sorry for the late reply. My mailbox filters failed me :-(
>
> Alex -- I'm looking into this issue. Will get back to you soon.
>

Looking at the schedstat data Alex posted:
- The distribution of load balances across cores looks about the same.
- The load balancer does more idle balances on 3.0-rc4 than on 2.6.39
in the SMT and NUMA domains. Busy and newidle balances are a mixed
bag.
- I see far fewer affine wakeups on 3.0-rc4 than on 2.6.39: about half
as many on SMT and about a quarter as many on NUMA.
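
For context, the affine-wakeup numbers above come from the schedstat
counters in wake_affine(); roughly, from memory of the 2.6.39 sources,
so treat this as an approximate excerpt rather than a verbatim quote:

  /* kernel/sched_fair.c: wake_affine(), approximate excerpt */
  if (balanced ||
      (this_load <= tl_per_task &&
       this_load + target_load(prev_cpu, idx) <= tl_per_task)) {
          /* these increments feed the affine-wakeup schedstats */
          schedstat_inc(sd, ttwu_move_affine);
          schedstat_inc(p, se.statistics.nr_wakeups_affine);
          return 1;
  }
  return 0;

If this_load and tl_per_task ended up in different units (scaled
vs. unscaled load) after the resolution change, this test would fail
more often, which would match the drop in affine wakeups.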

I'm investigating the impact of the load resolution patchset on the
effective load and wake-affine calculations, since the drop in affine
wakeups is the most obvious difference in the schedstat data.
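
For anyone following along: on 64-bit, the load resolution patchset
shifts load values up by SCHED_LOAD_RESOLUTION (10) bits. Here is a
minimal standalone illustration of the kind of unit mismatch Peter was
looking for -- the macro names mirror the patchset from memory, the
rest is just a toy demo:

  #include <stdio.h>

  #define SCHED_LOAD_RESOLUTION 10
  #define scale_load(w)      ((unsigned long)(w) << SCHED_LOAD_RESOLUTION)
  #define scale_load_down(w) ((unsigned long)(w) >> SCHED_LOAD_RESOLUTION)

  int main(void)
  {
          unsigned long nice0 = 1024; /* NICE_0_LOAD, pre-scaling */
          unsigned long scaled = scale_load(nice0);

          /* correct: both sides in the scaled domain */
          printf("ok:  %lu vs %lu\n", scaled, scale_load(nice0));
          /* buggy: the unscaled side looks 1024x lighter */
          printf("bug: %lu vs %lu\n", scale_load_down(scaled), scaled);
          return 0;
  }

Any comparison or division that mixes the two domains is off by a
factor of 1024, which would easily be enough to flip wake-affine
decisions.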

Alex -- I have a couple of questions about your test setup and results.
- What is the impact on throughput of these benchmarks?
- Would it be possible to get a "perf sched" trace on these two kernels?
- I'm assuming the three sched domains are SMT, MC and NUMA. Is that
right? Do you have any powersavings balance or special sched domain
flags enabled?
- Are you using group scheduling? If so, what does your setup look like?
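
(In case it helps: with CONFIG_SCHED_DEBUG, the per-domain flags are
readable under /proc/sys/kernel/sched_domain/cpu0/domain*/flags, and
the power-savings knobs are sched_mc_power_savings and
sched_smt_power_savings under /sys/devices/system/cpu/. For the trace,
"perf sched record sleep 30" on each kernel followed by "perf sched
latency" should be a reasonable starting point -- command details from
memory, so adjust as needed.)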

-Thanks,
Nikhil