Re: [PATCH 1/8] cpufreq: allow drivers to flag custom support for freq invariance
From: Ionela Voinescu
Date: Thu Jul 02 2020 - 07:44:30 EST
Hi,
On Thursday 02 Jul 2020 at 08:28:18 (+0530), Viresh Kumar wrote:
> On 01-07-20, 18:05, Rafael J. Wysocki wrote:
> > On Wed, Jul 1, 2020 at 3:33 PM Ionela Voinescu <ionela.voinescu@xxxxxxx> wrote:
> > > On Wednesday 01 Jul 2020 at 16:16:17 (+0530), Viresh Kumar wrote:
> > > > I will rather suggest CPUFREQ_SKIP_SET_FREQ_SCALE as the name and
> > > > functionality. We need to give drivers a choice if they do not want
> > > > the core to do it on their behalf, because they are doing it on their
> > > > own or they don't want to do it.
> >
> > Well, this would go backwards to me, as we seem to be designing an
> > opt-out flag for something that's not even implemented already.
> >
> > I would go for an opt-in instead. That would be much cleaner and less
> > prone to regressions IMO.
>
> That's fine, I just wanted an option for drivers to opt-out of this
> thing. I felt okay with the opt-out flag as this should be enabled for
> most of the drivers and so enabling by default looked okay as well.
>
> > > In this case we would not be able to tell if cpufreq (driver or core)
> > > can provide the frequency scale factor, so we would not be able to tell
> > > if the system is really frequency invariant; CPUFREQ_SKIP_SET_FREQ_SCALE
>
> That is easy to fix. Let the drivers call
> enable_cpufreq_freq_invariance() and set the flag.
>
Right! I suppose part of "the dream" :) was for drivers to be ignorant of
frequency invariance, and for the core to figure out if it has proper
information to potentially* pass to the scheduler.
*potentially = depending on the arch_set_freq_scale() definition.
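
For illustration, a minimal sketch of what I mean by the core doing it
on the drivers' behalf (the helper name and call site are invented for
this sketch; the point is that only the core, and not each driver,
would invoke the hook after a successful frequency change):

/*
 * Hypothetical core-side helper (name made up for this sketch): called
 * once after a successful frequency change, so individual drivers never
 * need to know about frequency invariance.
 */
static void cpufreq_update_freq_scale(struct cpufreq_policy *policy,
				      unsigned int new_freq)
{
	arch_set_freq_scale(policy->related_cpus, new_freq,
			    policy->cpuinfo.max_freq);
}
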
> > > would be set if either:
> > > - the driver calls arch_set_freq_scale() on its own
> > > - the driver does not want arch_set_freq_scale() to be called.
> > >
> > > So at the core level we would not be able to distinguish between the
> > > two, and return whether cpufreq-based invariance is supported.
> > >
> > > I don't really see a reason why a driver would not want to set the
> > > frequency scale factor
>
> A simple case where the driver doesn't have any idea what the real
> freq
For me, this would have been filtered either by the type of callback
the driver uses (target_index(), fast_switch() and even target() give a
reasonably accurate indication of the current frequency, while
setpolicy() obviously only targets a range of frequencies) or by the
definition of arch_set_freq_scale(). A rough sketch of that kind of
filtering follows below.
> ..of the CPU is and it doesn't have counters to guess it as well.
>
> There can be other reasons which we aren't able to imagine at this
> point of time.
>
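
Roughly, I was imagining something like the (hypothetical) helper below,
which infers from the registered callbacks whether a driver can tell the
core which frequency it actually set:

/*
 * Hypothetical helper (not existing code): drivers implementing
 * target(), target_index() or fast_switch() know the frequency they
 * set, while setpolicy() drivers only choose a range, so the core
 * cannot derive a scale factor for them.
 */
static bool cpufreq_driver_sets_freq(struct cpufreq_driver *drv)
{
	return !drv->setpolicy &&
	       (drv->target || drv->target_index || drv->fast_switch);
}
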
But I understand the points both you and Rafael raised, so it's clear
that an 'opt-in' flag would be the better option.
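
To make the opt-in concrete, this is roughly what I picture on the
driver side (the flag name is invented for this sketch, it is not an
existing cpufreq flag):

/* Hypothetical opt-in flag, name made up for this sketch. */
#define CPUFREQ_SETS_FREQ_SCALE		(1 << 6)

static struct cpufreq_driver example_driver = {
	.name	= "example",
	/*
	 * Advertise that the frequency scale factor will be set for
	 * this driver's CPUs, either by the driver itself or by the
	 * core on its behalf.
	 */
	.flags	= CPUFREQ_SETS_FREQ_SCALE,
	/* .init, .target_index, etc. */
};
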
Thank you both,
Ionela.
> --
> viresh