Re: [RFCv3 PATCH 30/48] sched: Calculate energy consumption of sched_group

From: Dietmar Eggemann
Date: Mon Mar 23 2015 - 16:21:56 EST


On 23/03/15 16:47, Peter Zijlstra wrote:
> On Mon, Mar 16, 2015 at 02:15:46PM +0000, Morten Rasmussen wrote:
>> You are absolutely right. The current code is broken for system
>> topologies where all cpus share the same clock source. To be honest, it
>> is actually worse than that, and you already pointed out the reason. We
>> don't have a way of representing top-level contributions to power
>> consumption in RFCv3, as we don't have a sched_group spanning all cpus
>> on a single-cluster system. For example, we can't represent L2 cache and
>> interconnect power consumption on such systems.
>>
>> In RFCv2 we had a system-wide sched_group dangling by itself for that
>> purpose. We chose to remove that in this rewrite as it led to messy
>> code. In my opinion, a more elegant solution is to introduce an
>> additional sched_domain above the current top level which has a single
>> sched_group spanning all cpus in the system. That should fix the
>> SD_SHARE_CAP_STATES problem and allow us to attach power data for the
>> top level.
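
[ To make that concrete: the extra level could be expressed via the
sched_domain_topology_level table. A rough sketch, with the MC/DIE lines
loosely following the TC2 topology patch in this series, and with all
SYS-level helpers (cpu_sys_mask(), cpu_sys_flags(), cpu_sys_energy())
made up here for illustration only: ]

static const struct cpumask *cpu_sys_mask(int cpu)
{
	/* one group spanning all cpus in the system */
	return cpu_possible_mask;
}

static inline int cpu_sys_flags(void)
{
	/* all cpus share the same clock source */
	return SD_SHARE_CAP_STATES;
}

static struct sched_domain_topology_level arm_topology[] = {
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask, cpu_corepower_flags, cpu_core_energy, SD_INIT_NAME(MC) },
#endif
	{ cpu_cpu_mask, NULL, cpu_cluster_energy, SD_INIT_NAME(DIE) },
	/* cpu_sys_energy() would return the system-wide energy data */
	{ cpu_sys_mask, cpu_sys_flags, cpu_sys_energy, SD_INIT_NAME(SYS) },
	{ NULL, },
};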

> Maybe remind us why this needs to be tied to sched_groups? Why can't we
> attach the energy information to the domains?

Currently, on our 2-cluster (big.LITTLE) system (cluster0: big cpus, cluster1: little cpus), we attach energy information to all sg's at MC level (cpu/core related energy data) and at DIE sd level (cluster related energy data).
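
(For reference, this is roughly what hangs off each sg in RFCv3; treat the
sketch as an outline of the patch set's data layout rather than the exact
code:)

struct capacity_state {
	unsigned long cap;	/* compute capacity at this P-state */
	unsigned long power;	/* busy power at this P-state */
};

struct idle_state {
	unsigned long power;	/* power consumed in this C-state */
};

struct sched_group_energy {
	unsigned int nr_idle_states;	/* number of idle states */
	struct idle_state *idle_states;	/* idle-state array */
	unsigned int nr_cap_states;	/* number of capacity states */
	struct capacity_state *cap_states; /* capacity-state array */
};

The same structure carries per-cpu data on the MC level sg's and
per-cluster data on the DIE level sg's.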

For the MC level (cpus sharing the same u-arch), attaching the energy information to the sd would clearly be much easier than attaching it to the individual sg's.

But at DIE level, when we want the cluster energy data for a cluster represented by an sg other than the first one (sg0), we would have to access that data via the DIE sd of one of the cpus of that cluster. I haven't seen code in CFS actually doing that.

IMHO, the current code always iterates over the sg's of an sd and accesses either sg (sched_group) or sg->sgc (sched_group_capacity) data. Our energy data follows the sched_group_capacity example.
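
(A sketch of that access pattern, assuming the usual circular sd->groups
list and the sg->sge pointer this series adds for the energy data:)

static void walk_groups(struct sched_domain *sd)
{
	struct sched_group *sg = sd->groups;

	do {
		/*
		 * Per-group data is read here, e.g. sg->sgc->capacity;
		 * the energy data (sg->sge) follows the same pattern,
		 * which is why it lives on the sg rather than the sd.
		 */
		sg = sg->next;
	} while (sg != sd->groups);
}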

> There is an additional problem with groups you've not yet discovered and
> that is overlapping groups. Certain NUMA topologies result in this.
> There the sum of cpus over the groups is greater than the total cpus in
> the domain.

Yeah, we haven't tried EAS on such a system, nor have we enabled the FORCE_SD_OVERLAP sched feature in a long time.
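
(A made-up layout, not from real hardware, just to illustrate the
accounting problem with overlapping groups:

	domain span : cpus 0-3		weight 4
	group 0 span: cpus 0-2		weight 3
	group 1 span: cpus 2,3,0	weight 3

The group weights sum to 6 > 4, so naively walking sd->groups and adding
up per-group energy contributions would count cpus 0 and 2 twice.)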
