On Mon, Feb 3, 2020 at 10:14 PM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:
>
> rpm_msgs are copied into contiguously allocated memory during write_batch.
> Update the request pointer to correctly point to the designated area for
> rpm_msgs.
>
> While at it, also add a missing list_del before freeing rpm_msgs.
>
> Signed-off-by: Maulik Shah <mkshah@xxxxxxxxxxxxxx>
> ---
>  drivers/soc/qcom/rpmh.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
> index c3d6f00..04c7805 100644
> --- a/drivers/soc/qcom/rpmh.c
> +++ b/drivers/soc/qcom/rpmh.c
> @@ -65,7 +65,7 @@ struct cache_req {
>  struct batch_cache_req {
>         struct list_head list;
>         int count;
> -       struct rpmh_request rpm_msgs[];
> +       struct rpmh_request *rpm_msgs;
>  };
>
>  static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
> @@ -327,8 +327,10 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>         unsigned long flags;
>
>         spin_lock_irqsave(&ctrlr->cache_lock, flags);
> -       list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
> +       list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
> +               list_del(&req->list);
>                 kfree(req);
> +       }
>         INIT_LIST_HEAD(&ctrlr->batch_cache);

Hm, I don't get it. list_for_each_entry_safe ensures you can traverse
the list while freeing it behind you. ctrlr->batch_cache is now a
bogus list, but is re-inited with the lock held. From my reading,
there doesn't seem to be anything wrong with the current code. Can you
elaborate on the bug you found?
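
For reference, the _safe variant stashes the next node before the loop
body runs, which is exactly what makes kfree() of the current entry
legal. Roughly, paraphrasing the macro in include/linux/list.h:

/*
 * "n" is loaded from "pos" before the body executes, so the body may
 * free "pos" without breaking the traversal.
 */
#define list_for_each_entry_safe(pos, n, head, member)                  \
        for (pos = list_first_entry(head, typeof(*pos), member),        \
                n = list_next_entry(pos, member);                       \
             &pos->member != (head);                                    \
             pos = n, n = list_next_entry(n, member))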

>         spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>  }
> @@ -377,10 +379,11 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>                 return -ENOMEM;
>
>         req = ptr;
> +       rpm_msgs = ptr + sizeof(*req);
>         compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
>
>         req->count = count;
> -       rpm_msgs = req->rpm_msgs;
> +       req->rpm_msgs = rpm_msgs;

I don't really understand what this is fixing either, can you explain?
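
For what it's worth, my reading is that the flexible array member
already points at the same place the new explicit arithmetic computes.
A minimal userspace sketch of that layout argument, using hypothetical
stand-in types rather than the real rpmh structs:

/* Userspace sketch, not kernel code: with a flexible array member,
 * req->rpm_msgs already evaluates to the byte right after the struct
 * header, i.e. the same address the patch computes by hand as
 * ptr + sizeof(*req). "msg" and "batch" are stand-ins.
 */
#include <assert.h>
#include <stdlib.h>

struct msg { int data[4]; };            /* stand-in for struct rpmh_request */

struct batch {                          /* stand-in for struct batch_cache_req */
        int count;
        struct msg rpm_msgs[];          /* flexible array member, as upstream */
};

int main(void)
{
        int count = 3;
        struct batch *req = calloc(1, sizeof(*req) + count * sizeof(struct msg));

        if (!req)
                return 1;

        /* For this layout the array begins right at sizeof(*req). */
        assert((char *)req->rpm_msgs == (char *)req + sizeof(*req));

        free(req);
        return 0;
}

If that holds for the real structs too, the extra pointer only adds a
level of indirection without changing any of the addresses involved.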