Re: [PATCH v16 4/6] soc: qcom: rpmh: Invoke rpmh_flush() for dirty caches

From: Maulik Shah
Date: Sun Apr 12 2020 - 09:51:45 EST


Hi,

On 4/9/2020 1:46 PM, Stephen Boyd wrote:
Quoting Maulik Shah (2020-04-08 00:08:48)
Hi,

On 4/8/2020 8:20 AM, Stephen Boyd wrote:
Quoting Maulik Shah (2020-04-05 23:32:19)
for CPU PM notification. They may be in autonomous mode executing
low power modes and do not require rpmh_flush() to happen from the
CPU PM notification.

Signed-off-by: Maulik Shah <mkshah@xxxxxxxxxxxxxx>
Reviewed-by: Douglas Anderson <dianders@xxxxxxxxxxxx>
---
drivers/soc/qcom/rpmh-internal.h | 25 +++++---
drivers/soc/qcom/rpmh-rsc.c | 123 +++++++++++++++++++++++++++++++++++----
drivers/soc/qcom/rpmh.c | 26 +++------
3 files changed, 137 insertions(+), 37 deletions(-)

diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index b718221..fbe1f3e 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -6,6 +6,7 @@
[...]
+
+static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
+ unsigned long action, void *v)
+{
+ struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
+ int ret = NOTIFY_OK;
+
+ spin_lock(&drv->pm_lock);
+
+ switch (action) {
+ case CPU_PM_ENTER:
I thought CPU_PM notifiers weren't supposed to be used anymore? Or at
least, the genpd work that has gone on for cpuidle could be used here in
place of CPU_PM notifiers?
genpd was used in v3 and v4 of this series, where from pd's .power_off
function, rpmh_flush() was invoked.

genpd can be useful if the target firmware supports PSCI's OSI mode, while
sc7180 is a non-OSI target.

The current approach (using CPU PM notification) can be used for both OSI
and non-OSI targets to invoke rpmh_flush() when the last CPU goes to power down.
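
For reference, a minimal sketch of that last-CPU detection (not the patch
itself; the pm_cpus/flush_on_last_cpu names are made up, and the real code
flushes this controller's cached sleep/wake requests at that point):

#include <linux/cpu_pm.h>
#include <linux/cpumask.h>
#include <linux/notifier.h>
#include <linux/smp.h>
#include <linux/spinlock.h>

static struct cpumask pm_cpus;
static DEFINE_SPINLOCK(pm_lock);

static int flush_on_last_cpu(struct notifier_block *nfb,
                             unsigned long action, void *v)
{
        spin_lock(&pm_lock);

        switch (action) {
        case CPU_PM_ENTER:
                cpumask_set_cpu(smp_processor_id(), &pm_cpus);
                /* Last CPU entering low power: program sleep/wake TCSes. */
                if (cpumask_equal(&pm_cpus, cpu_online_mask)) {
                        /* rpmh_flush() for this RSC would be invoked here */
                }
                break;
        case CPU_PM_ENTER_FAILED:
        case CPU_PM_EXIT:
                cpumask_clear_cpu(smp_processor_id(), &pm_cpus);
                break;
        }

        spin_unlock(&pm_lock);
        return NOTIFY_OK;
}

static struct notifier_block pm_nb = { .notifier_call = flush_on_last_cpu };
/* registered once at probe with cpu_pm_register_notifier(&pm_nb) */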
Ok. Doug and I talked today and I re-read the earlier series and I think
Sudeep was suggesting that if we're doing last man down activities here
then we're better off using OSI vs. PC mode. But I can only assume
that's because the concern is something here requires software's help
for last man down activities like lowering a CPU voltage setting or
turning off some power switch to a hardware block through some i2c
message. The way I understand it the last man down activities here are
just setting up the sleep and wake TCS FIFOs to "do the right thing"
when the last CPU actually goes down and the first CPU wakes up by
running through the pile of "instructions" that we program into the
FIFOs.

The execution of those instructions is all done in hardware so any
aggregation or coordination between CPUs is not really important here.
All that matters is that we set up the sleep and wake TCS FIFOs properly
so that _if_ the whole CPU subsystem goes to sleep we're going to let
the hardware turn off the right stuff and lower voltages, etc. and
vice-versa for wake. If we didn't have to share the TCS FIFOs with
active mode control then we could just tweak the sleep and wake TCS
buckets at runtime and let the hardware state of the CPUs decide to
trigger them at the right time.
Correct.
Unfortunately, we don't have that luxury
and we're stuck repurposing the sleep TCS FIFO to control things like
regulator voltages when the CPU is awake. Yuck!
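
To make that sharing concrete, this is roughly how a client stages its votes
today with the existing rpmh API (rpmh_write() and enum rpmh_state from
include/soc/qcom/rpmh.h; the address/data values are made-up examples). The
SLEEP/WAKE votes are only cached and later written into the sleep/wake TCSes
by rpmh_flush(), while the ACTIVE_ONLY vote is sent immediately, possibly
over a borrowed wake TCS:

#include <soc/qcom/rpmh.h>
#include <soc/qcom/tcs.h>

static int stage_votes(struct device *dev)
{
        struct tcs_cmd active = { .addr = 0x30000, .data = 0x1 };
        struct tcs_cmd sleep  = { .addr = 0x30000, .data = 0x0 };
        struct tcs_cmd wake   = { .addr = 0x30000, .data = 0x1 };
        int ret;

        /* Sent right away using the active (or borrowed wake) TCS. */
        ret = rpmh_write(dev, RPMH_ACTIVE_ONLY_STATE, &active, 1);
        if (ret)
                return ret;

        /* Only cached here; rpmh_flush() copies these into the
         * sleep/wake TCSes before the subsystem power collapses. */
        ret = rpmh_write(dev, RPMH_SLEEP_STATE, &sleep, 1);
        if (ret)
                return ret;

        return rpmh_write(dev, RPMH_WAKE_ONLY_STATE, &wake, 1);
}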
The RSC has TCS hardware, currently divided into ACTIVE/SLEEP/WAKE TCS configurations.
The ACTIVE TCS and the SLEEP + WAKE TCS use cases are mutually exclusive, in the sense that when
the ACTIVE TCS is in use the SLEEP + WAKE TCSes are not (the CPU is already out of low power mode
doing an active-only transfer, so firmware cannot trigger the deepest low power modes where the SLEEP and WAKE TCSes are used).

Similarly, when the SLEEP + WAKE TCSes are in use, the ACTIVE TCS is not (none of the CPUs is in Linux
to send an active message). Given that, some RSCs already don't have a dedicated ACTIVE TCS; when we
want to send an active-only message we borrow one TCS from the WAKE TCS pool, configure it for ACTIVE use (e.g. enable
the completion IRQ for that TCS), and once done re-configure it back for WAKE use only.

So with reduced hardware (no dedicated ACTIVE TCSes), the same functionality is achieved.
The rpmh driver is designed to support such scenarios.
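
Roughly, that borrow/restore flow looks like this (a pseudo-code sketch with
hypothetical helper names, not the driver's actual functions):

/* Borrow a WAKE TCS for an active-only request on RSCs without a
 * dedicated ACTIVE TCS. All helpers below are hypothetical. */
static int send_active_via_wake_tcs(struct rsc_drv *drv,
                                    const struct tcs_request *msg)
{
        int tcs_id = pick_free_wake_tcs(drv);           /* hypothetical */

        if (tcs_id < 0)
                return -EBUSY;

        enable_tcs_completion_irq(drv, tcs_id, true);   /* ACTIVE-style use */
        write_and_trigger_tcs(drv, tcs_id, msg);        /* send the request */
        wait_for_tcs_completion(drv, tcs_id);

        enable_tcs_completion_irq(drv, tcs_id, false);  /* back to WAKE-only */
        return 0;
}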

Thanks,
Maulik

And so this isn't actually any different
than what was proposed originally to use genpd for this?

I guess the answer to this is yes. Which is fine. CPU PM notifiers are
still used by various drivers to do things like save/restore state of
devices that lose state when the CPUs power down. The use of genpd is
helpful for OSI mode because it can describe how/when big and little
clusters are powered off by putting them in different genpds. For
counting the last CPU to turn off it seems simpler to just register for
CPU PM notifiers and not care about genpd logic and nesting clusters,
etc. I'm happy to see this not be a blocker.

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation