On Tue, Feb 4, 2020 at 9:12 PM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:

I don't think that's true. The "_safe" part of list_for_each_entry_safe handles exactly that case.

On 2/5/2020 6:01 AM, Evan Green wrote:

On Mon, Feb 3, 2020 at 10:14 PM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:

Hi Evan,

rpm_msgs are copied into contiguously allocated memory during write_batch.
Update the request pointer to correctly point to the designated area for
rpm_msgs. While at this, also add the missing list_del before freeing
rpm_msgs.
Signed-off-by: Maulik Shah <mkshah@xxxxxxxxxxxxxx>
---
drivers/soc/qcom/rpmh.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
index c3d6f00..04c7805 100644
--- a/drivers/soc/qcom/rpmh.c
+++ b/drivers/soc/qcom/rpmh.c
@@ -65,7 +65,7 @@ struct cache_req {
struct batch_cache_req {
struct list_head list;
int count;
- struct rpmh_request rpm_msgs[];
+ struct rpmh_request *rpm_msgs;
};
static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
@@ -327,8 +327,10 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
unsigned long flags;
spin_lock_irqsave(&ctrlr->cache_lock, flags);
- list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+ list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
+ list_del(&req->list);
kfree(req);
+ }
INIT_LIST_HEAD(&ctrlr->batch_cache);
Hm, I don't get it. list_for_each_entry_safe ensures you can traverse
the list while freeing it behind you. ctrlr->batch_cache is now a
bogus list, but is re-inited with the lock held. From my reading,
there doesn't seem to be anything wrong with the current code. Can you
elaborate on the bug you found?
When we don't do list_del, there might be access to already freed memory:
even after the current item is freed via kfree(req), without list_del the
next and prev items' pointers still point to this freed region.

It seems best to call list_del to ensure that, before this area is freed,
no other item in the list refers to it.
list_for_each_entry_safe ensures that we don't touch the ->next member
of any node after freeing it. So I don't think there's any case where
we could touch freed memory. The list_del still seems like needless
code to me.
Thanks Evan,

spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

I don't really understand what this is fixing either, can you explain?
}
@@ -377,10 +379,11 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
return -ENOMEM;
req = ptr;
+ rpm_msgs = ptr + sizeof(*req);
compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
req->count = count;
- rpm_msgs = req->rpm_msgs;
+ req->rpm_msgs = rpm_msgs;
The contiguous memory allocated via the below is for 3 items:

ptr = kzalloc(sizeof(*req) + count * (sizeof(req->rpm_msgs[0]) +
	      sizeof(*compls)), GFP_ATOMIC);

1. batch_cache_req, followed by
2. total count of rpmh_request, followed by
3. total count of compls
The current code starts using (3) compls from the proper offset in memory:

compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);

however, for (2) rpmh_request it does

rpm_msgs = req->rpm_msgs;

Because of this it starts 8 bytes before its designated area and overlaps
with the last entry of the (1) batch_cache_req struct.

This patch corrects it via the below, to ensure rpmh_request uses the
correct start address in memory:

rpm_msgs = ptr + sizeof(*req);
I don't follow that either. The empty array declaration (or the
GCC-specific version of it, "struct rpmh_request rpm_msgs[0];") is a
flexible array member, meaning the member itself doesn't take up any
space in the struct. So, for instance, it holds true that
&(req->rpm_msgs[0]) == (req + 1). By my reading the existing code is
correct, and your patch just adds a needless pointer indirection.
Check out this wikipedia entry:
https://en.wikipedia.org/wiki/Flexible_array_member