Re: [PATCH v5 00/23] Introduce runtime modifiable Energy Model

From: Qais Yousef
Date: Sun Dec 17 2023 - 13:23:12 EST


Hi Lukasz

On 11/29/23 11:08, Lukasz Luba wrote:
> Hi all,
>
> This patch set adds a new feature which allows modifying the Energy Model
> (EM) power values at runtime. It makes it possible to better reflect the
> power model of recent SoCs and silicon. Different characteristics of the
> power usage can be leveraged, and thus better decisions made during task
> placement in EAS.
>
> It's part of the feature set known as the Dynamic Energy Model. It has
> been presented and discussed recently at OSPM2023 [3]. This patch set
> implements the 1st improvement for the EM.

Thanks. The problem of EM accuracy has been observed in the field, and it
would be nice to have a mainline solution for it. We carry our own
out-of-tree change to enable modifying the EM.

>
> The concepts:
> 1. CPU power usage can vary due to the workload being run or due to the
> temperature of the SoC. The same workload can use more power when the
> temperature of the silicon has increased (e.g. due to a hot GPU or ISP).
> In such a situation the EM can be adjusted to reflect the increased
> power usage. That power increase comes from static power (often simply
> called leakage). The CPUs in recent SoCs differ: we have heterogeneous
> SoCs with 3 (or even 4) different microarchitectures, built with either
> High Performance (HP) cells or Low Power (LP) cells. They are affected
> by temperature increases differently: HP cells have bigger leakage. The
> SW model can leverage that knowledge.

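(For context, the power being modeled here splits roughly as
P_total ~= C_eff * f * V^2 + P_static(V, T), and it is the second term,
which grows steeply, roughly exponentially, with temperature, that this
concept wants to track.)
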
One thing I'm not sure about is that in practice the temperature of the SoC
can vary a lot in a short period of time. What is the expectation here? I
can see this being useful in practice only if we average over a window of
time. Following it closely will be really hard; big variations can happen
on a few-ms scale.

The driver interface for this part makes sense, as the thermal framework
will likely know how to feed things back into the EM table, if necessary.
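
FWIW, below is roughly the flow I'd expect such a driver to follow with the
new API. It's a rough sketch from my reading of the series, not tested, and
leak_uw() is a made-up per-SoC lookup for the extra static power at a given
temperature:

#include <linux/device.h>
#include <linux/energy_model.h>
#include <linux/rcupdate.h>

/* Made-up helper: extra static power (uW) at this temperature/OPP */
static unsigned long leak_uw(int temp, int opp)
{
        return 0;       /* real values would come from characterization */
}

static int example_thermal_em_update(struct device *cpu_dev, int temp)
{
        struct em_perf_domain *pd = em_pd_get(cpu_dev);
        struct em_perf_table *new_table;
        struct em_perf_state *old_ps;
        int i, ret;

        if (!pd)
                return -EINVAL;

        new_table = em_table_alloc(pd);
        if (!new_table)
                return -ENOMEM;

        /* Copy the live table, adding the temperature-dependent leakage */
        rcu_read_lock();
        old_ps = em_perf_state_from_pd(pd);
        for (i = 0; i < pd->nr_perf_states; i++) {
                new_table->state[i] = old_ps[i];
                new_table->state[i].power += leak_uw(temp, i);
        }
        rcu_read_unlock();

        /* Refresh the 'cost' values EAS uses for its energy estimates */
        ret = em_dev_compute_costs(cpu_dev, new_table->state,
                                   pd->nr_perf_states);
        if (!ret)
                ret = em_dev_update_perf_domain(cpu_dev, new_table);

        em_table_free(new_table);       /* drop our reference */
        return ret;
}

If a thermal governor ends up calling something like this every few ms,
that's a lot of allocation and RCU churn, which feeds the concern above.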

>
> 2. It is also possible to change the EM to better reflect the currently
> running workload. Usually the EM is derived from average power values
> taken from experiments with a benchmark (e.g. Dhrystone). A model derived
> from such a scenario might not properly represent the workloads usually
> running on the device. Therefore, runtime modification of the EM allows
> switching to a different model when there is a need.

I didn't get how the new performance field is supposed to be controlled and
modified by users. A driver interface doesn't seem suitable, as there's no
subsystem that knows the characteristics of the workload except userspace.
In Android we do have contextual info about the current top-app, which
would enable modifying the capacities to match its characteristics.
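
To make that concrete, what we'd want is for something like the switch
below to happen when the top-app changes (a completely hypothetical
interface, not something this series provides):

        # top-app is now the camera; switch the big cores' EM profile
        echo camera > /sys/devices/system/cpu/energy_model/pd6/profile

i.e. a userspace-triggerable update, rather than a driver having to guess
what's running.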

>
> 3. The EM can be adjusted after boot, when all the modules are loaded and
> more information about the SoC is available, e.g. chip binning. This
> helps to better reflect the silicon characteristics. This EM modification
> API now allows that; it wasn't possible in the past and the EM had to be
> 'set in stone'.
>
> More detailed explanation and background can be found in presentations
> during LPC2022 [1][2] or in the documentation patches.
>
> Some test results.
> The EM can be updated to better fit the workload type. In the case below
> the EM has been updated for the Jankbench test on Pixel6 (running v5.18
> w/ mainline backports for the scheduler bits). Jankbench was run 10 times
> for each of the two configurations, to get more reliable data.
>
> 1. Janky frames percentage
> +--------+-----------------+---------------------+-------+-----------+
> | metric | variable | kernel | value | perc_diff |
> +--------+-----------------+---------------------+-------+-----------+
> | gmean | jank_percentage | EM_default | 2.0 | 0.0% |
> | gmean | jank_percentage | EM_modified_runtime | 1.3 | -35.33% |
> +--------+-----------------+---------------------+-------+-----------+
>
> 2. Avg frame render time duration
> +--------+---------------------+---------------------+-------+-----------+
> | metric | variable | kernel | value | perc_diff |
> +--------+---------------------+---------------------+-------+-----------+
> | gmean | mean_frame_duration | EM_default | 10.5 | 0.0% |
> | gmean | mean_frame_duration | EM_modified_runtime | 9.6 | -8.52% |
> +--------+---------------------+---------------------+-------+-----------+
>
> 3. Max frame render time duration
> +--------+--------------------+---------------------+-------+-----------+
> | metric | variable | kernel | value | perc_diff |
> +--------+--------------------+---------------------+-------+-----------+
> | gmean | max_frame_duration | EM_default | 251.6 | 0.0% |
> | gmean | max_frame_duration | EM_modified_runtime | 115.5 | -54.09% |
> +--------+--------------------+---------------------+-------+-----------+
>
> 4. OS overutilized state percentage (when EAS is not working)
> +--------------+---------------------+------+------------+------------+
> | metric | wa_path | time | total_time | percentage |
> +--------------+---------------------+------+------------+------------+
> | overutilized | EM_default | 1.65 | 253.38 | 0.65 |
> | overutilized | EM_modified_runtime | 1.4 | 277.5 | 0.51 |
> +--------------+---------------------+------+------------+------------+
>
> 5. All CPUs (Little+Mid+Big) power values in mW
> +------------+--------+---------------------+-------+-----------+
> | channel | metric | kernel | value | perc_diff |
> +------------+--------+---------------------+-------+-----------+
> | CPU | gmean | EM_default | 142.1 | 0.0% |
> | CPU | gmean | EM_modified_runtime | 131.8 | -7.27% |
> +------------+--------+---------------------+-------+-----------+

How did you modify the EM here? Did you change both the performance and
power fields? How did you calculate the new values?
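
IIUC the 'cost' field can be regenerated from the new power values
(cost = fmax * power / freq), so I'd assume you only derived new power
and/or performance numbers, but it'd be good to spell out the recipe.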

Did you try to simulate any heating effect during the run, if you're taking
temperature into account to modify the power? What did the variation look
like, and at what rate was the EM being updated in this case? I think
Jankbench in general wouldn't stress the SoC enough.

It'd be insightful to look at the frequency residencies between the two
runs, and the power breakdown for each cluster, if you have access to them.
No worries if not!

My brain started to fail me somewhere around patch 15. I'll have another
look some time later in the week, but generally this looks good to me. If I
have any worries, they are about how it can be used with the provided
interfaces, especially the expectations around managing fast thermal
changes at the level you're targeting.


Thanks!

--
Qais Yousef