Quoting Maulik Shah (2020-11-26 02:18:18)
lockdep_assert_irqs_disabled() was added to check that rpmh_flush()
can only be invoked when irqs are disabled. This is true for the
APPS RSC, as the last CPU going to the deepest low power mode is
the one writing the sleep and wake TCSes.

However, for RSCs that support solver mode, drivers can invoke
rpmh_write_sleep_and_wake() to immediately write the cached sleep
and wake sets to TCSes from any CPU. Conditionally check whether the
RSC controller supports 'HW solver' mode and, if so, skip the check
for irqs disabled, as such RSCs can write sleep and wake TCSes at
any point.
Signed-off-by: Maulik Shah <mkshah@xxxxxxxxxxxxxx>
---
drivers/soc/qcom/rpmh-internal.h | 5 +++++
drivers/soc/qcom/rpmh-rsc.c | 3 +++
drivers/soc/qcom/rpmh.c | 26 ++++++++++++++++++++++----
3 files changed, 30 insertions(+), 4 deletions(-)
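For context, here is a minimal sketch (not part of this patch) of the kind
of solver-mode caller that motivates relaxing the assert. The
rpmh_write_sleep_and_wake() signature is assumed from patch 2 of this
series, and the client function itself is hypothetical:

#include <linux/device.h>
#include <soc/qcom/tcs.h>	/* enum rpmh_state, struct tcs_cmd */
#include <soc/qcom/rpmh.h>	/* rpmh_write(), rpmh_write_sleep_and_wake() */

/*
 * Hypothetical client on a solver-capable RSC: cache sleep/wake votes,
 * then ask RPMh to program the TCSes immediately, from normal process
 * context with irqs enabled - exactly the case the conditional
 * lockdep_assert_irqs_disabled() has to allow.
 */
static int example_commit_solver_votes(struct device *dev,
				       const struct tcs_cmd *sleep_cmds,
				       const struct tcs_cmd *wake_cmds,
				       u32 n)
{
	int ret;

	ret = rpmh_write(dev, RPMH_SLEEP_STATE, sleep_cmds, n);
	if (ret)
		return ret;

	ret = rpmh_write(dev, RPMH_WAKE_ONLY_STATE, wake_cmds, n);
	if (ret)
		return ret;

	/* Assumed signature: immediately flush the cached sleep/wake sets. */
	return rpmh_write_sleep_and_wake(dev);
}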
diff --git a/drivers/soc/qcom/rpmh-internal.h b/drivers/soc/qcom/rpmh-internal.h
index 79486d6..39fa3c5 100644
--- a/drivers/soc/qcom/rpmh-internal.h
+++ b/drivers/soc/qcom/rpmh-internal.h
@@ -17,6 +17,9 @@
#define MAX_TCS_NR (MAX_TCS_PER_TYPE * TCS_TYPE_NR)
#define MAX_TCS_SLOTS (MAX_CMDS_PER_TCS * MAX_TCS_PER_TYPE)
+/* CTRLR specific flags */
+#define SOLVER_PRESENT 1
+
struct rsc_drv;
/**
@@ -78,6 +81,7 @@ struct rpmh_request {
* @cache_lock: synchronize access to the cache data
* @dirty: was the cache updated since flush
* @in_solver_mode: Controller is busy in solver mode
+ * @flags: Controller specific flags
* @batch_cache: Cache sleep and wake requests sent as batch
*/
struct rpmh_ctrlr {
@@ -85,6 +89,7 @@ struct rpmh_ctrlr {
spinlock_t cache_lock;
bool dirty;
bool in_solver_mode;
+ u32 flags;
	struct list_head batch_cache;
 };

Maybe unsigned long is more appropriate? Do we rely on 32-bits vs.
64-bits?

We don't rely on 32-bits vs. 64-bits, u32 should be fine.
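Since the field is only used as a bitmask of controller capabilities, u32
leaves plenty of room. Purely for illustration, further capability bits
could sit alongside it as sketched below; the extra names are hypothetical
and not part of this series, and the patch itself defines SOLVER_PRESENT as
plain 1 rather than BIT(0) (same value):

#include <linux/bits.h>

/* Flag from this patch (BIT(0) == 1, so the value is unchanged). */
#define SOLVER_PRESENT		BIT(0)
/* Hypothetical future capability bits, for illustration only. */
#define HW_CHANNEL_PRESENT	BIT(1)
#define FAST_PATH_PRESENT	BIT(2)

A controller would then advertise several capabilities at probe time, e.g.
drv->client.flags |= SOLVER_PRESENT | FAST_PATH_PRESENT;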
diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
index ffb4ca7..4caaddf 100644
--- a/drivers/soc/qcom/rpmh-rsc.c
+++ b/drivers/soc/qcom/rpmh-rsc.c
@@ -1031,6 +1031,9 @@ static int rpmh_rsc_probe(struct platform_device *pdev)
if (!solver_config) {
drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
cpu_pm_register_notifier(&drv->rsc_pm);
+ drv->client.flags &= ~SOLVER_PRESENT;
+ } else {
+ drv->client.flags |= SOLVER_PRESENT;
 	}

It looks like this could be tested by checking for
drv->rsc_pm.notifier_call being non-NULL?

It may for now, but going forward we may have different flags to
indicate various functions supported by RSC.
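For reference, the alternative test being suggested might look like the
sketch below on rpmh.c's side (rpmh.c already sees struct rsc_drv through
rpmh-internal.h); the helper name is hypothetical:

#include <linux/kernel.h>	/* container_of() */

#include "rpmh-internal.h"

/*
 * Hypothetical helper: instead of a separate SOLVER_PRESENT flag, infer
 * solver support from the fact that only non-solver RSCs register a
 * CPU PM notifier in rpmh_rsc_probe().
 */
static bool rpmh_ctrlr_has_solver(struct rpmh_ctrlr *ctrlr)
{
	struct rsc_drv *drv = container_of(ctrlr, struct rsc_drv, client);

	return !drv->rsc_pm.notifier_call;
}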
/* Enable the active TCS to send requests immediately */
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index 725b8f0..604d511 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -83,6 +83,9 @@ static int check_ctrlr_state(struct rpmh_ctrlr *ctrlr, enum rpmh_state state)
if (state != RPMH_ACTIVE_ONLY_STATE)
return ret;
+ if (!(ctrlr->flags & SOLVER_PRESENT))
+ return ret;
+
/* Do not allow sending active votes when in solver mode */
spin_lock(&ctrlr->cache_lock);
if (ctrlr->in_solver_mode)
@@ -468,12 +471,24 @@ int rpmh_flush(struct rpmh_ctrlr *ctrlr)
struct cache_req *p;
int ret = 0;
- lockdep_assert_irqs_disabled();
+ /*
+ * For RSCs that don't have solver mode,
+ * rpmh_flush() is only called when we think we're running
+ * on the last CPU with irqs_disabled.
+ *
+ * For RSCs that have solver mode,
+ * rpmh_flush() can be invoked with irqs enabled by any CPU.
+ *
+ * Conditionally check for irqs_disabled only when solver mode
+ * is not available.
+ */
+
+ if (!(ctrlr->flags & SOLVER_PRESENT))
+ lockdep_assert_irqs_disabled();
Can we have a different function that is called for the case where
solver mode is present and where solver mode isn't present? It would be
good to clearly show that rpmh_flush() thinks it is being called from
the last CPU vs. from some other random place because the code is
assuming solver vs. non-solver enabled state. It would be clearer from
the call site too.

Hmm, we can. Patch 2 of this series already added a different function
which will be called where solver is present.
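To make the calling context explicit, the split could look roughly like
this; the names and the shared rpmh_flush_common() helper are hypothetical,
not the actual functions from patch 2:

#include <linux/irqflags.h>	/* lockdep_assert_irqs_disabled() */

#include "rpmh-internal.h"

/* Hypothetical shared body: walks the cache and writes the TCSes. */
static int rpmh_flush_common(struct rpmh_ctrlr *ctrlr);

/*
 * Non-solver path: only reached from the CPU PM notifier on what we
 * believe is the last CPU, so irqs must already be disabled.
 */
static int rpmh_flush_last_cpu(struct rpmh_ctrlr *ctrlr)
{
	lockdep_assert_irqs_disabled();
	return rpmh_flush_common(ctrlr);
}

/*
 * Solver path: reached via rpmh_write_sleep_and_wake() from any CPU,
 * possibly with irqs enabled, so no such assertion applies.
 */
static int rpmh_flush_solver(struct rpmh_ctrlr *ctrlr)
{
	return rpmh_flush_common(ctrlr);
}

That way the irqs-disabled assumption is visible at the call site rather
than being inferred from a flag inside rpmh_flush().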
/*
- * Currently rpmh_flush() is only called when we think we're running
- * on the last processor. If the lock is busy it means another
- * processor is up and it's better to abort than spin.
+ * If the lock is busy it means another transaction is ongoing; in
+ * that case it's better to abort than spin.
*/
if (!spin_trylock(&ctrlr->cache_lock))
return -EBUSY;