Re: [PATCH v3 0/6] sched: Extend sched_mc/smt_framework

From: Vaidyanathan Srinivasan
Date: Thu Mar 19 2009 - 11:16:30 EST


* Gautham R Shenoy <ego@xxxxxxxxxx> [2009-03-18 14:52:17]:

> Hi,
>
> I am reposting iteration 3 of the patch series that extends the existing
> sched_smt/mc_power_savings framework to work on platforms
> that have on-chip memory controllers, making each CPU package
> a 'node'. I have rebased this patch series against 2.6.29-rc8.
>
> Changes from V2: (Found here: --> http://lkml.org/lkml/2009/3/3/109)
> - Patches have been split up in an incremental manner for easy review.
> - Fixed comments for some variables.
> - Renamed some variables to better reflect their usage.
>
> Changes from V1: (Found here: --> http://lkml.org/lkml/2009/2/16/221)
> - Added comments to explain power-saving part in find_busiest_group()
> - Added comments for the different sched_domain levels.
>
> Background
> ------------------------------------------------------------------
> On machines with an on-chip memory controller, each physical CPU
> package forms a NUMA node, and the CPU-level sched_domain has
> only one group. This prevents any form of power-saving balance across
> these nodes. Enabling the sched_mc_power_savings tunable to work as
> designed on these new single-package NUMA node machines will help task
> consolidation and save power, as it does on other multi-core,
> multi-socket platforms.
>
> Consolidation across nodes has implications for cross-node memory
> access and other NUMA locality issues. Even under such constraints
> there is scope for power-savings vs. performance tradeoffs, and
> hence making sched_mc_power_savings work as expected on these
> platforms is justified.

If the workload threads share a lot of data from cache, then
consolidating them will improve sharing in the package's last-level
cache. If most of the working set fits in the on-chip cache, the
cross-node reference latencies will be effectively hidden.

> sched_mc/smt_power_savings is still a tunable; the power-savings
> benefits and the performance impact will vary with the workload,
> the system topology, and hardware features.
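Since the tunable is exposed through sysfs, it can be flipped from user
space. A minimal sketch follows; it assumes the 2.6.29-era paths
/sys/devices/system/cpu/sched_mc_power_savings and
sched_smt_power_savings, so the sysfs root is taken as a parameter
rather than hard-coded:

```python
from pathlib import Path

# Assumed sysfs location of the scheduler power-savings knobs (2.6.29 era).
SYSFS_CPU = Path("/sys/devices/system/cpu")

# Assumed level semantics: 0 disables power-saving balance, 1 consolidates
# tasks onto fewer packages, 2 is a more aggressive consolidation policy.
VALID_LEVELS = (0, 1, 2)

def set_power_savings(level, knob="sched_mc_power_savings", root=SYSFS_CPU):
    """Write a power-savings level to the given scheduler knob file."""
    if level not in VALID_LEVELS:
        raise ValueError("level must be one of %r" % (VALID_LEVELS,))
    # Writing the ASCII digit (newline-terminated) is how sysfs
    # integer attributes are normally set.
    (root / knob).write_text("%d\n" % level)
```

For example, `set_power_savings(1)` would request package-level task
consolidation, and the same helper serves sched_smt_power_savings via
the knob parameter.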

In your results we can see significant performance degradation for
marginal power savings when sibling threads are used to run the
workload. Kernbench is CPU-intensive and perhaps did not leave many
stall cycles in the processor for the sibling thread to exploit.

Other workloads that do stall on memory references may not see such
degradation when run on sibling threads.

--Vaidy