Re: [RFC][PATCH 7/7] cpufreq: intel_pstate: Change P-state selection algorithm for Core
From: Srinivas Pandruvada
Date: Mon Aug 22 2016 - 14:53:51 EST
Hi Doug,
I am not able to apply this patch. Can you send as a patch on top of
Rafael's RFC 7/7. Since test takes long time, I want to apply correct
patch.
Thanks,
Srinivas
On Sat, 2016-08-13 at 08:59 -0700, Doug Smythies wrote:
> On 2016.08.05 17:02 Rafael J. Wysocki wrote:
> >
> > >
> > > On 2016.08.03 21:19 Doug Smythies wrote:
> > > >
> > > > On 2016.07.31 16:49 Rafael J. Wysocki wrote:
> > > >
> > > > The PID-base P-state selection algorithm used by intel_pstate
> > > > for
> > > > Core processors is based on very weak foundations.
> > > ...[cut]...
> > >
> > > >
> > > > +static inline int32_t get_target_pstate_default(struct cpudata
> > > > *cpu)
> > > > +{
> > > > + struct sample *sample = &cpu->sample;
> > > > + int32_t busy_frac;
> > > > + int pstate;
> > > > +
> > > > + busy_frac = div_fp(sample->mperf, sample->tsc);
> > > > + sample->busy_scaled = busy_frac * 100;
> > > > +
> > > > + if (busy_frac < cpu->iowait_boost)
> > > > + busy_frac = cpu->iowait_boost;
> > > > +
> > > > + cpu->iowait_boost >>= 1;
> > > > +
> > > > + pstate = cpu->pstate.turbo_pstate;
> > > > + return fp_toint((pstate + (pstate >> 2)) * busy_frac);
> > > > +}
> > > > +
> My previous replies (and see below) have suggested that some filtering
> is needed on the target pstate; otherwise, depending on the type of
> workload, it tends to oscillate.
>
> I added the IIR (Infinite Impulse Response) filter that I have
> suggested in the past:
>
> diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
> index c43ef55..262ec5f 100644
> --- a/drivers/cpufreq/intel_pstate.c
> +++ b/drivers/cpufreq/intel_pstate.c
> @@ -98,6 +98,7 @@ static inline u64 div_ext_fp(u64 x, u64 y)
>   * @tsc:		Difference of time stamp counter between last and
>   *			current sample
>   * @time:		Current time from scheduler
> + * @target:		Filtered target pstate.
>   *
>   * This structure is used in the cpudata structure to store performance sample
>   * data for choosing next P State.
> @@ -108,6 +109,7 @@ struct sample {
>  	u64 aperf;
>  	u64 mperf;
>  	u64 tsc;
> +	u64 target;
>  	u64 time;
>  };
> 
> @@ -1168,6 +1170,7 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
>  		pstate_funcs.get_vid(cpu);
> 
>  	intel_pstate_set_min_pstate(cpu);
> +	cpu->sample.target = int_tofp(cpu->pstate.min_pstate);
>  }
> 
>  static inline void intel_pstate_calc_avg_perf(struct cpudata *cpu)
> @@ -1301,8 +1304,10 @@ static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
>  static inline int32_t get_target_pstate_default(struct cpudata *cpu)
>  {
>  	struct sample *sample = &cpu->sample;
> +	int64_t scaled_gain, unfiltered_target;
>  	int32_t busy_frac;
>  	int pstate;
> +	u64 duration_ns;
> 
>  	busy_frac = div_fp(sample->mperf, sample->tsc);
>  	sample->busy_scaled = busy_frac * 100;
> @@ -1313,7 +1318,74 @@ static inline int32_t get_target_pstate_default(struct cpudata *cpu)
>  	cpu->iowait_boost >>= 1;
> 
>  	pstate = cpu->pstate.turbo_pstate;
> -	return fp_toint((pstate + (pstate >> 2)) * busy_frac);
> +	/* To Do: I think the above should be:
> +	 *
> +	 * if (limits.no_turbo || limits.turbo_disabled)
> +	 *	pstate = cpu->pstate.max_pstate;
> +	 * else
> +	 *	pstate = cpu->pstate.turbo_pstate;
> +	 *
> +	 * Figure it out.
> +	 *
> +	 * No clamps. Pre-filter clamping was needed in past implementations.
> +	 * To Do: Is any pre-filter clamping needed here? */
> +
> +	unfiltered_target = (pstate + (pstate >> 2)) * busy_frac;
> +
> +	/*
> +	 * Idle check.
> +	 * We have a deferrable timer. Very long durations can be
> +	 * either due to long idle (C0 time near 0),
> +	 * or due to short idle times that spanned jiffy boundaries
> +	 * (C0 time not near zero).
> +	 *
> +	 * To Do: As of the utilization stuff, I do not think the
> +	 * spanning jiffy boundaries thing is true anymore.
> +	 * Check, and fix the comment.
> +	 *
> +	 * The very long durations are 0.4 seconds or more.
> +	 * Either way, a very long duration will effectively flush
> +	 * the IIR filter, otherwise falling edge load response times
> +	 * can be on the order of tens of seconds, because this driver
> +	 * runs very rarely. Furthermore, for higher periodic loads that
> +	 * just so happen to not be in the C0 state on jiffy boundaries,
> +	 * the long ago history should be forgotten.
> +	 * For cases of durations that are a few times the set sample
> +	 * period, increase the IIR filter gain so as to weight
> +	 * the current sample more appropriately.
> +	 *
> +	 * To Do: sample_time should be forced to be accurate. For
> +	 * example, if the kernel is a 250 Hz kernel, then a
> +	 * sample_rate_ms of 10 should result in a sample_time of 12.
> +	 *
> +	 * To Do: Check that the IO Boost case is not filtered too much.
> +	 *	  It might be that a filter bypass is needed for the boost case.
> +	 *	  However, the existing gain = f(duration) might be good enough.
> +	 */
> +
> +	duration_ns = cpu->sample.time - cpu->last_sample_time;
> +
> +	scaled_gain = div_u64(int_tofp(duration_ns) *
> +			int_tofp(pid_params.p_gain_pct), int_tofp(pid_params.sample_rate_ns));
> +	if (scaled_gain > int_tofp(100))
> +		scaled_gain = int_tofp(100);
> +	/*
> +	 * This code should not be required,
> +	 * but short duration times have been observed.
> +	 * To Do: Check if this code is actually still needed. I don't think so.
> +	 */
> +	if (scaled_gain < int_tofp(pid_params.p_gain_pct))
> +		scaled_gain = int_tofp(pid_params.p_gain_pct);
> +
> +	/*
> +	 * Bandwidth limit the output. For now, re-task p_gain_pct for this purpose.
> +	 * Use a simple IIR (Infinite Impulse Response) filter.
> +	 */
> +	cpu->sample.target = div_u64((int_tofp(100) - scaled_gain) *
> +				cpu->sample.target + scaled_gain *
> +				unfiltered_target, int_tofp(100));
> +
> +	return fp_toint(cpu->sample.target + (1 << (FRAC_BITS-1)));
>  }
> 
>  static inline void intel_pstate_update_pstate(struct cpudata *cpu, int pstate)
> @@ -1579,6 +1651,7 @@ static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
>  		return;
> 
>  	intel_pstate_set_min_pstate(cpu);
> +	cpu->sample.target = int_tofp(cpu->pstate.min_pstate);
>  }
> 
>  static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
> 
> The filter introduces a trade-off between step function load response
> time and the tendency for the target pstate to oscillate.
>
> ...[cut]...
>
> >
> > >
> > > Several tests were done with this patch set.
> > > The patch set would not apply to kernel 4.7, but did apply fine
> > > to a 4.7+ kernel
> > > (I did as of 7a66ecf) from a few days ago.
> > >
> > > Test 1: Phoronix ffmpeg test (less time is better):
> > > Reason: Because it suffers from rotating amongst CPUs in an odd
> > > way, challenging for CPU frequency scaling drivers.
> > > This test tends to be an indicator of potential troubles with
> > > some games.
> > > Criteria: (Dirk Brandewie): Must match or better acpi_cpufreq -
> > > ondemand.
> > > With patch set: 15.8 Seconds average and 24.51 package watts.
> > > Without patch set: 11.61 Seconds average and 27.59 watts.
> > > Conclusion: Significant reduction in performance with proposed
> > > patch set.
> With the filter this becomes even worse, at ~18 seconds.
> I used to fix this by moving the response curve to the left.
> I have not tested this yet:
>
> +       unfiltered_target = (pstate + (pstate >> 1)) * busy_frac;
>
> which moves the response curve a little to the left. I will test it.
>
> ...[cut]...
>
> >
> > >
> > > Test 9: Doug's_specpower simulator (20% load):
> > > Time is fixed, less energy is better.
> > > Reason: During the long
> > > "[intel-pstate driver regression] processor frequency very high
> > > even if in idle"
> > > and subsequent https://bugzilla.kernel.org/show_bug.cgi?id=115771
> > > discussion / thread(s), some sort of test was needed to try to
> > > mimic what Srinivas
> > > was getting on his fancy SpecPower test platform. So far at
> > > least, this test does that.
> > > Only the 20% load case was created, because that was the biggest
> > > problem case back then.
> > > With patch set: 4 tests at an average of 7197 Joules per test,
> > > relatively high CPU frequencies.
> > > Without the patch set: 4 tests at an average of 5956 Joules per
> > > test, relatively low CPU frequencies.
> > > Conclusion: 21% energy regression with the patch set.
> > > Note: Newer processors might do better than my older i7-2600K.
> Patch set + above and IIR gain = 10%: 5670 Joules.
> Conclusion: Energy regression eliminated.
>
> Other gains:
>
> gain =  5%: 5342 Joules; Busy MHz: 2172
> gain = 10%: 5670 Joules; Busy MHz: 2285
> gain = 20%: 6126 Joules; Busy MHz: 2560
> gain = 30%: 6426 Joules; Busy MHz: 2739
> gain = 40%: 6674 Joules; Busy MHz: 2912
> gain = 70%: 7109 Joules; Busy MHz: 3199
>
> locked at minimum pstate (reference): 4653 Joules; Busy MHz: 1600
> Performance mode (reference): 7808 Joules; Busy MHz: 3647
>
> >
> > >
> > > Test 10: measure the frequency response curve, fixed work packet
> > > method,
> > > 75 hertz work / sleep frequency (all CPU, no IOWAIT):
> > > Reason: To compare to some older data and observe overall.
> > > png graph NOT attached.
> > > Conclusions: Tends to oscillate, suggesting some sort of damping
> > > is needed.
> > > However, any filtering tends to increase the step function load
> > > rise time
> > > (see test 11 below, I think there is some wiggle room here).
> > > See also graph which has: with and without patch set; performance
> > > mode (for reference);
> > > Philippe Longepe's cpu_load method also with setpoint 40 (for
> > > reference); one of my previous
> > > attempts at a load related patch set from quite some time ago
> > > (for reference).
> As expected, the filter damps out the oscillation.
> New graphs will be sent to Rafael and Srinivas off-list.
>
> >
> > >
> > >
> > > Test 11: Look at the step function load response. From no load to
> > > 100% on one CPU (CPU load only, no IO).
> > > While there is a graph, it is not attached:
> > > Conclusion: The step function response is greatly improved
> > > (virtually one sample time max).
> > > It would probably be O.K. to slow it down a little with a filter
> > > so as to reduce the
> > > tendency to oscillate under periodic load conditions (to a point,
> > > at least. A low enough frequency will
> > > always oscillate) (see the graph for test10).
> I haven't done this test yet, but from previous work, a gain setting
> of 10 to 15% gives a
> load step function response time similar to the current PID based
> filter.
>
> The other tests gave similar results with or without the filter.
>
> ... Doug
>
>