Re: [PATCH v2 4/7] cpufreq: report whether cpufreq supports Frequency Invariance (FI)

From: Ionela Voinescu
Date: Wed Jul 29 2020 - 10:39:17 EST


Hi,

On Monday 27 Jul 2020 at 16:02:18 (+0200), Rafael J. Wysocki wrote:
> On Wed, Jul 22, 2020 at 11:38 AM Ionela Voinescu
> <ionela.voinescu@xxxxxxx> wrote:
[..]
> > +static inline
> > +void enable_cpufreq_freq_invariance(struct cpufreq_driver *driver)
> > +{
> > + if ((driver->target || driver->target_index || driver->fast_switch) &&
> > + !driver->setpolicy) {
> > +
> > + static_branch_enable_cpuslocked(&cpufreq_set_freq_scale);
> > + pr_debug("%s: Driver %s can provide frequency invariance.",
> > + __func__, driver->name);
> > + } else
> > + pr_err("%s: Driver %s cannot provide frequency invariance.",
> > + __func__, driver->name);
>
> This doesn't follow the kernel coding style (the braces around the
> pr_err() statement are missing).
>

I'll fix this.

Also, depending on the result of the discussion below, it might be best
for this to be a warning, not an error.

> Besides, IMO on architectures where arch_set_freq_scale() is empty,
> this should be empty as well.
>

Yes, you are right, there are two aspects here:
- (1) Whether a driver *can* provide frequency invariance. IOW, whether
it implements the callbacks that result in the call to
arch_set_freq_scale().

- (2) Whether cpufreq/driver *does* provide frequency invariance. IOW,
whether the call to arch_set_freq_scale() actually results in the
setting of the scale factor.

Even when creating this v2 I was going back and forth between the options
for this:

(a) cpufreq should report whether it *can* provide frequency invariance
(as described at (1)). If we go for this, for clarity I should change

s/cpufreq_set_freq_scale/cpufreq_can_set_freq_scale_key/
s/cpufreq_sets_freq_scale()/cpufreq_can_set_freq_scale()/

Through this, cpufreq only reports that it calls
arch_set_freq_scale(), independently of whether that call results in a
scale factor being set. Then it would be up to the caller to ensure
this information is used with a proper definition of
arch_set_freq_scale().

(b) cpufreq should report whether it *does* provide frequency invariance

A way of doing this is to use an arch_set_freq_scale #define (as done
for other arch functions, for example arch_scale_freq_tick()) and to
guard this enable_cpufreq_freq_invariance() function based on that
definition.
Therefore, cpufreq_sets_freq_scale() would report whether
enable_cpufreq_freq_invariance() was successful and there is an
external definition of arch_set_freq_scale() that sets the scale
factor.


The current version is somewhat a combination of (a) and (b):
cpufreq_set_freq_scale would initially be enabled if the proper callbacks
are implemented (a), but it is later disabled if the weak version of
arch_set_freq_scale() ends up being called (b) (as can be seen below).

[..]
> > __weak void arch_set_freq_scale(struct cpumask *cpus, unsigned long cur_freq,
> > unsigned long max_freq)
> > {
> > + if (cpufreq_sets_freq_scale())
> > + static_branch_disable_cpuslocked(&cpufreq_set_freq_scale);
> > +
> > }
> > EXPORT_SYMBOL_GPL(arch_set_freq_scale);

I suppose a clear (a) or (b) solution might be better here.

IMO, given that even (b) cannot guarantee that a scale factor is actually
set through arch_set_freq_scale() based on the current and maximum
frequencies cpufreq passes, I prefer (a): it conveys the only information
cpufreq can reliably convey - the fact that it *can* set the scale
factor, not that it *does*.

Can you please confirm whether you still prefer (b), given the details
above?

Thank you,
Ionela.