Re: [PATCH 3/3] intel_pstate: Clean up get_target_pstate_use_performance()

From: Srinivas Pandruvada
Date: Mon May 09 2016 - 21:25:25 EST


On Sat, 2016-05-07 at 01:47 +0200, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
>
> The way the code in get_target_pstate_use_performance() is arranged
> and the comments in there are totally confusing, so modify them to
> reflect what's going on.
>
> The results of the computations should be the same as before.
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
Acked-by: Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>

> ---
>  drivers/cpufreq/intel_pstate.c |   32 +++++++++++++-------------------
>  1 file changed, 13 insertions(+), 19 deletions(-)
>
> Index: linux-pm/drivers/cpufreq/intel_pstate.c
> ===================================================================
> --- linux-pm.orig/drivers/cpufreq/intel_pstate.c
> +++ linux-pm/drivers/cpufreq/intel_pstate.c
> @@ -1241,43 +1241,37 @@ static inline int32_t get_target_pstate_
>  
>  static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
>  {
> -	int32_t core_busy, max_pstate, current_pstate, sample_ratio;
> +	int32_t perf_scaled, sample_ratio;
>  	u64 duration_ns;
>  
>  	/*
> -	 * core_busy is the ratio of actual performance to max
> -	 * max_pstate is the max non turbo pstate available
> -	 * current_pstate was the pstate that was requested during
> -	 *	the last sample period.
> -	 *
> -	 * We normalize core_busy, which was our actual percent
> -	 * performance to what we requested during the last sample
> -	 * period. The result will be a percentage of busy at a
> -	 * specified pstate.
> +	 * perf_scaled is the average performance during the last sampling
> +	 * period (in percent) scaled by the ratio of the P-state requested
> +	 * last time to the maximum P-state.  That measures the system's
> +	 * response to the previous P-state selection.
>  	 */
> -	core_busy = 100 * cpu->sample.core_avg_perf;
> -	max_pstate = cpu->pstate.max_pstate_physical;
> -	current_pstate = cpu->pstate.current_pstate;
> -	core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
> +	perf_scaled = div_fp(cpu->pstate.max_pstate_physical,
> +			     cpu->pstate.current_pstate);
> +	perf_scaled = mul_fp(perf_scaled, 100 * cpu->sample.core_avg_perf);
>  
>  	/*
>  	 * Since our utilization update callback will not run unless we are
>  	 * in C0, check if the actual elapsed time is significantly greater (3x)
>  	 * than our sample interval.  If it is, then we were idle for a long
> -	 * enough period of time to adjust our busyness.
> +	 * enough period of time to adjust our performance metric.
>  	 */
>  	duration_ns = cpu->sample.time - cpu->last_sample_time;
>  	if ((s64)duration_ns > pid_params.sample_rate_ns * 3) {
>  		sample_ratio = div_fp(pid_params.sample_rate_ns, duration_ns);
> -		core_busy = mul_fp(core_busy, sample_ratio);
> +		perf_scaled = mul_fp(perf_scaled, sample_ratio);
>  	} else {
>  		sample_ratio = div_fp(100 * cpu->sample.mperf, cpu->sample.tsc);
>  		if (sample_ratio < int_tofp(1))
> -			core_busy = 0;
> +			perf_scaled = 0;
>  	}
>  
> -	cpu->sample.busy_scaled = core_busy;
> -	return cpu->pstate.current_pstate - pid_calc(&cpu->pid, core_busy);
> +	cpu->sample.busy_scaled = perf_scaled;
> +	return cpu->pstate.current_pstate - pid_calc(&cpu->pid, perf_scaled);
>  }
>  
>  static inline void intel_pstate_update_pstate(struct cpudata *cpu, int pstate)
> 