Re: [PATCH V2 5/5] PM / Domains: Propagate performance state updates
From: Ulf Hansson
Date: Fri Nov 30 2018 - 05:18:56 EST
On Fri, 30 Nov 2018 at 10:59, Viresh Kumar <viresh.kumar@xxxxxxxxxx> wrote:
>
> On 30-11-18, 10:44, Ulf Hansson wrote:
> > On Mon, 26 Nov 2018 at 09:10, Viresh Kumar <viresh.kumar@xxxxxxxxxx> wrote:
> > >
> > > This commit updates genpd core to start propagating performance state
> > > updates to master domains that have their set_performance_state()
> > > callback set.
> > >
> > > A genpd handles two type of performance states now. The first one is the
> > > performance state requirement put on the genpd by the devices and
> > > sub-domains under it, which is already represented by
> > > genpd->performance_state.
> >
> > This isn't correct.
> >
> > Currently genpd->performance_state reflects the aggregation of the
> > state requirements from each device that is attached to it. Not from
> > subdomains.
>
> Okay, will improve the changelog.
>
> > > The second type, introduced in this commit, is
> > > the performance state requirement(s) put by the genpd on its master
> > > domain(s). There is a separate value required for each master that the
> > > genpd has and so a new field is added to the struct gpd_link
> > > (link->performance_state), which represents the link between a genpd and
> > > its master. The struct gpd_link also got another field
> > > prev_performance_state, which is used by genpd core as a temporary
> > > variable during transitions.
> > >
> > > We need to propagate setting performance state while powering-on a genpd
> > > as well, as we ignore performance state requirements from sub-domains
> > > which are powered-off. For this reason _genpd_power_on() also received
> > > the additional parameter, depth, which is used for hierarchical locking
> > > within genpd.
> > >
> > > Signed-off-by: Viresh Kumar <viresh.kumar@xxxxxxxxxx>
> > > ---
> > > drivers/base/power/domain.c | 113 ++++++++++++++++++++++++++++++------
> > > include/linux/pm_domain.h | 4 ++
> > > 2 files changed, 98 insertions(+), 19 deletions(-)
> > >
> > > diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
> > > index d6b389a9f678..206e263abe90 100644
> > > --- a/drivers/base/power/domain.c
> > > +++ b/drivers/base/power/domain.c
> > > @@ -239,24 +239,88 @@ static void genpd_update_accounting(struct generic_pm_domain *genpd)
> > > static inline void genpd_update_accounting(struct generic_pm_domain *genpd) {}
> > > #endif
> > >
> > > +static int _genpd_reeval_performance_state(struct generic_pm_domain *genpd,
> > > + unsigned int state, int depth);
> > > +
> >
> > I don't like forward declarations like these, as in most cases it
> > means that the code can be re-worked and improved.
> >
> > However, because of the recursiveness code in genpd, I understand that
> > it may be needed for some cases. Anyway, please have look and see if
> > you can rid of it.
>
> Will see if I can do it any better, though I really doubt it :)
>
> > > static int _genpd_set_performance_state(struct generic_pm_domain *genpd,
> > > - unsigned int state)
> > > + unsigned int state, int depth)
>
>
> > > -static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
> > > +static int
> > > +_genpd_power_on(struct generic_pm_domain *genpd, bool timed, int depth)
> >
> > Cosmetics, but please move the new parameter below instead of moving
> > "static int" above.
> >
> > > {
> > > unsigned int state_idx = genpd->state_idx;
> > > ktime_t time_start;
> > > @@ -368,7 +442,8 @@ static int _genpd_power_on(struct generic_pm_domain *genpd, bool timed)
> > > elapsed_ns = ktime_to_ns(ktime_sub(ktime_get(), time_start));
> > >
> > > if (unlikely(genpd->set_performance_state)) {
> > > - ret = genpd->set_performance_state(genpd, genpd->performance_state);
> > > + ret = _genpd_set_performance_state(genpd,
> > > + genpd->performance_state, depth);
> >
> > This is going to be a problem for genpd_sync_power_on() as in some
> > cases we can't allow to use locks.
>
> So we pass the use_lock parameter to this routine as well ?
Maybe.
>
> > Moreover I am wondering how this plays with genpd_power_on(), as when
> > it calls _genpd_power_on() the first time, that is done by holding the
> > top-level master's lock - and all locks in that hierarchy. In other
> > words we are always powering on the top most master first, then step
> > by step we move down in the hierarchy to power on the genpds.
>
> Sure, but the ordering of locks is always subdomain first and then master.
> Considering the case of Qcom, we have two domains Cx (sub-domain) and Mx (master).
>
> On first genpd_power_on(Cx) call, we will first call genpd_power_on(Mx) which
> will just power it on as none of its master will have perf-state support. We
> then call _genpd_power_on(Cx) which will also not do anything with Mx as its own
> (Cx's) pstate would be 0 at that time. But even if it had a valid value, it will
> propagate just fine with all proper locking in place.
Can you explain that, it's not super easy to follow the flow.
So what will happen if Cx has a value that needs to be propagated?
What locks will be taken, and in what order?
Following, what if we had a Bx domain, being the subdomain of Cx, and
it too had a value that needs to be propagated. It sounds like we will
do the propagation one time per level. Is that really necessary,
couldn't we just do it once, after the power on sequence have been
completed?
Kind regards
Uffe