Re: [RFC PATCH v3 0/6] sched/cpufreq: Make schedutil energy aware

From: Peter Zijlstra
Date: Thu Oct 17 2019 - 05:50:29 EST


On Mon, Oct 14, 2019 at 04:50:24PM +0100, Douglas Raillard wrote:

> I posted some numbers based on a similar experiment on the v2 of that series that
> are still applicable:
>
> TL;DR the rt-app negative slack is divided by 1.75 by this series, with an
> increase of 3% in total energy consumption. There is a burst every 0.6s, and
> the average power consumption increase is proportional to the average number
> of bursts.
>
>
> The workload is an rt-app task ramping up from 5% to 75% util in one big step,
> pinned on a big core. The whole cycle is 0.6s long (0.3s at 5% followed by 0.3s at 75%).
> This cycle is repeated 20 times and the average of boosting is taken.
>
> The test device is a Google Pixel 3 (Qcom Snapdragon 845) phone.
> It has a lot more OPPs than a HiKey 960, so gradations in boosting
> are better reflected in frequency selection.
>
> avg slack (higher=better):
> Average time between task sleep and its next periodic activation.
> See rt-app doc: https://github.com/scheduler-tools/rt-app/blob/9a50d76f726d7c325c82ac8c7ed9ed70e1c97937/doc/tutorial.txt#L631
>
> avg negative slack (lower in absolute value=better):
> Same as avg slack, but only taking into account negative values.
> Negative slack means a task activation did not have enough time to complete before the next
> periodic activation fired, which is what we want to avoid.
>
> boost energy overhead (lower=better):
> Extra power consumption induced by ramp boost, assuming a continuous OPP space (infinite number of OPPs)
> and single-CPU policies. In practice, a fixed number of OPPs decreases this value, and more CPUs per policy increase it,
> since boost(policy) = max(boost(cpu) for each cpu of policy).
>
> Without ramp boost:
> +--------------------+--------------------+
> |avg slack (us) |avg negative slack |
> | |(us) |
> +--------------------+--------------------+
> |6598.72 |-10217.13 |
> |6595.49 |-10200.13 |
> |6613.72 |-10401.06 |
> |6600.29 |-9860.872 |
> |6605.53 |-10057.64 |
> |6612.05 |-10267.50 |
> |6599.01 |-9939.60 |
> |6593.79 |-9445.633 |
> |6613.56 |-10276.75 |
> |6595.44 |-9751.770 |
> +--------------------+--------------------+
> |average |
> +--------------------+--------------------+
> |6602.76 |-10041.81 |
> +--------------------+--------------------+
>
>
> With ramp boost enabled:
> +--------------------+--------------------+--------------------+
> |boost energy |avg slack (us) |avg negative slack |
> |overhead (%) | |(us) |
> +--------------------+--------------------+--------------------+
> |3.05 |7148.93 |-5664.26 |
> |3.04 |7144.69 |-5667.77 |
> |3.05 |7149.05 |-5698.31 |
> |2.97 |7126.71 |-6040.23 |
> |3.02 |7140.28 |-5826.78 |
> |3.03 |7135.11 |-5749.62 |
> |3.05 |7140.24 |-5750.0 |
> |3.05 |7144.84 |-5667.04 |
> |3.07 |7157.30 |-5656.65 |
> |3.06 |7154.65 |-5653.76 |
> +--------------------+--------------------+--------------------+
> |average |
> +--------------------+--------------------+--------------------+
> |3.039000 |7144.18 |-5737.44 |
> +--------------------+--------------------+--------------------+
>
>
> The negative slack is due to missed activations while the utilization signals
> increase during the big utilization step. Ramp boost is designed to boost the frequency during
> that phase, which materializes as 1.75x less negative slack, for an extra power
> consumption under 3%.

OK, so I think I see what it is doing, and why.

Normally we use (map_util_freq):

freq = C * max_freq * util / max ; C=1.25

But here, when util is increasing, we effectively increase our C to
allow picking a higher OPP. Because of that higher OPP we finish our
work sooner (avg slack increases) and miss our activation less often
(avg neg slack decreases).

Now, the thing is, we use map_util_freq() in more places; should we not
reflect this increase in C for all of them? That is, why does this patch
change get_next_freq() and not map_util_freq()?

I don't think that question is answered in the Changelogs.

Exactly because it does change the energy consumption (it must), should
that not also be reflected in the EAS logic?

I'm still thinking about the exact means you're using to raise C; that
is, the 'util - util_est' as cost_margin. It hurts my brain still.