Re: [PATCH V3 3/9] cpufreq: Cap the default transition delay value to 10 ms
From: Leonard Crestez
Date: Tue Jul 25 2017 - 07:55:01 EST
On Wed, 2017-07-19 at 15:42 +0530, Viresh Kumar wrote:
> If transition_delay_us isn't defined by the cpufreq driver, the default
> value of transition delay (time after which the cpufreq governor will
> try updating the frequency again) is currently calculated by multiplying
> transition_latency (nsec) with LATENCY_MULTIPLIER (1000) and then
> converting this time to usec. That gives the exact same value as
> transition_latency, just that the time unit is usec instead of nsec.
>
> With acpi-cpufreq for example, transition_latency is set to around 10
> usec and we get transition delay as 10 ms. Which seems to be a
> reasonable amount of time to reevaluate the frequency again.
>
> But for platforms where frequency switching isn't that fast (like ARM),
> the transition_latency varies from 500 usec to 3 ms, and the transition
> delay becomes 500 ms to 3 seconds. Of course, that is a pretty bad
> default value to start with.
>
> We can try to come across a better formula (instead of multiplying with
> LATENCY_MULTIPLIER) to solve this problem, but will that be worth it ?
>
> This patch tries a simple approach and caps the maximum value of default
> transition delay to 10 ms. Of course, userspace can still come in and
> change this value anytime or individual drivers can rather provide
> transition_delay_us instead.
>
> Signed-off-by: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
> ---
>  drivers/cpufreq/cpufreq.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
> index c426d21822f7..d00cde871c15 100644
> --- a/drivers/cpufreq/cpufreq.c
> +++ b/drivers/cpufreq/cpufreq.c
> @@ -532,8 +532,19 @@ unsigned int cpufreq_policy_transition_delay_us(struct cpufreq_policy *policy)
>  		return policy->transition_delay_us;
>  
>  	latency = policy->cpuinfo.transition_latency / NSEC_PER_USEC;
> -	if (latency)
> -		return latency * LATENCY_MULTIPLIER;
> +	if (latency) {
> +		/*
> +		 * For platforms that can change the frequency very fast (< 10
> +		 * us), the above formula gives a decent transition delay. But
> +		 * for platforms where transition_latency is in milliseconds, it
> +		 * ends up giving unrealistic values.
> +		 *
> +		 * Cap the default transition delay to 10 ms, which seems to be
> +		 * a reasonable amount of time after which we should reevaluate
> +		 * the frequency.
> +		 */
> +		return min(latency * LATENCY_MULTIPLIER, (unsigned int)10000);
> +	}
>  
>  	return LATENCY_MULTIPLIER;
>  }
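
To restate the before/after arithmetic above as a standalone sketch (plain userspace C, not the kernel code; the function names are mine, the constants and example latencies are the ones from the changelog):

#include <stdio.h>

#define NSEC_PER_USEC		1000U
#define LATENCY_MULTIPLIER	1000U	/* same value as in the cpufreq core */

/* Default transition delay (usec) before this patch. */
static unsigned int delay_before(unsigned int latency_ns)
{
	unsigned int latency = latency_ns / NSEC_PER_USEC;

	return latency ? latency * LATENCY_MULTIPLIER : LATENCY_MULTIPLIER;
}

/* Default transition delay (usec) with the 10 ms cap applied. */
static unsigned int delay_after(unsigned int latency_ns)
{
	unsigned int latency = latency_ns / NSEC_PER_USEC;
	unsigned int delay;

	if (!latency)
		return LATENCY_MULTIPLIER;

	delay = latency * LATENCY_MULTIPLIER;
	return delay < 10000 ? delay : 10000;
}

int main(void)
{
	/* acpi-cpufreq: ~10 us latency -> 10 ms delay either way */
	printf("10 us: %u us -> %u us\n", delay_before(10000), delay_after(10000));
	/* slow ARM platform: 3 ms latency -> 3 s before, 10 ms after */
	printf("3 ms : %u us -> %u us\n", delay_before(3000000), delay_after(3000000));
	return 0;
}
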
This patch made its way into linux-next and it seems to cause i.MX SoCs to almost always hang around their max frequency with the ondemand governor, even when almost completely idle. The lowest frequency is never reached. This seems wrong?
This driver calculates transition_latency at probe time; the value is not terribly accurate, but it comes out to something like latency = 109 us, so this patch clamps the default transition delay to roughly 10% of what it used to be.
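To spell out the numbers (using the 109 us figure above): the old default delay was 109 us * LATENCY_MULTIPLIER (1000) = 109000 us, i.e. ~109 ms, while the new default is min(109 ms, 10 ms) = 10 ms, so roughly 9% of the previous value.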
It's worth noting that the default IMX config has HZ=100 and NO_HZ_IDLE=y, so a jiffy is 10 ms, the same as the new default delay; maybe doing idle checks at a rate comparable to the jiffy tick screws stuff up? I don't understand what ondemand is trying to do.