Re: [PATCH v14 6/6] soc: qcom: rpmh-rsc: Allow using free WAKE TCS for active request

From: Maulik Shah
Date: Tue Mar 31 2020 - 04:57:53 EST


Hi,

On 3/28/2020 12:12 AM, Doug Anderson wrote:
> Hi,
>
> On Fri, Mar 27, 2020 at 5:04 AM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:
>>> Why can't rpmh_write() / rpmh_write_async() / rpmh_write_batch() just
>>> always unconditionally mark the cache dirty? Are there really lots of
>>> cases where those calls are made and they do nothing?
>> At rpmh.c, it doesn't know that rpmh-rsc.c worked on a borrowed TCS to
>> finish the request.
>>
>> We should not blindly mark the cache dirty every time.
> In message ID "5a5274ac-41f4-b06d-ff49-c00cef67aa7f@xxxxxxxxxxxxxx",
> which seems to be missing from the archives, you said:
>
>> yes we should trust callers not to send duplicate data
>
> ...you can see some reference to it in my reply:
>
> https://lore.kernel.org/r/CAD=FV=VPSahhK71k_D+nfL1=5QE5sKMQT=6zzyEF7+JWMcTxsg@xxxxxxxxxxxxxx/
>
> If callers are trusted to never send duplicate data, then every call to
> rpmh_write() will make a change. ...and thus the cache should always be
> marked dirty, no? Also note that since an rpmh_write() to "active" also
> counts as a write to "wake", even those will dirty the cache.
>
> Which case are you expecting an rpmh_write() call to not dirty the cache?
Ok, I will remove marking the cache dirty here.
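
Roughly what I have in mind is below. This is only an illustrative sketch
against my reading of the current rpmh.c (it assumes the existing
struct rpmh_ctrlr / struct cache_req definitions and the __find_req()
lookup helper), not the actual patch:

static struct cache_req *cache_rpm_request(struct rpmh_ctrlr *ctrlr,
                                           enum rpmh_state state,
                                           struct tcs_cmd *cmd)
{
    struct cache_req *req;
    unsigned long flags;

    spin_lock_irqsave(&ctrlr->cache_lock, flags);

    req = __find_req(ctrlr, cmd->addr);
    if (!req) {
        req = kzalloc(sizeof(*req), GFP_ATOMIC);
        if (!req) {
            req = ERR_PTR(-ENOMEM);
            goto unlock;
        }
        req->addr = cmd->addr;
        req->sleep_val = req->wake_val = UINT_MAX;
        list_add_tail(&req->list, &ctrlr->cache);
    }

    switch (state) {
    case RPMH_ACTIVE_ONLY_STATE:
    case RPMH_WAKE_ONLY_STATE:
        /* an "active" write also counts as a "wake" write */
        req->wake_val = cmd->data;
        break;
    case RPMH_SLEEP_STATE:
        req->sleep_val = cmd->data;
        break;
    }

    /*
     * Callers are trusted not to resend duplicate data, so any cached
     * write invalidates whatever was last flushed: mark the controller
     * dirty unconditionally instead of comparing old and new values.
     */
    ctrlr->dirty = true;
unlock:
    spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

    return req;
}

With this, the old/new value comparison goes away and any cached write
leaves ctrlr->dirty set until the next rpmh_flush().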


>>> ...interestingly, after your patch I guess tcs_invalidate() no longer
>>> needs spinlocks since it's only ever called from PM code on the last
>>> CPU. ...if you agree, I can always do it in my cleanup series. See:
>>>
>>> https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@xxxxxxxxxxxxxx
>>>
>>> -Doug
>> There are other RSCs which use the same driver, so let's keep the spinlock.
> It is really hard to write code keeping in mind these "other RSCs" for
> which there is no code upstream. IMO we should write for what is
> supported upstream today, and then when those "other RSCs" get added we
> can evaluate their needs.

Agree, but I would insist on not removing locks that are already there in
your cleanup/documentation series; they would only need to be added back
later. The locks don't cause any issue being there, since only the last
CPU invokes rpmh_flush() at present. Adding support for other RSCs is on
my to-do list, and when that is done we can re-evaluate and remove them
if they are not required.
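
For reference, the lock being discussed sits in tcs_invalidate() roughly
as below (a trimmed sketch from my reading of the current rpmh-rsc.c; it
may not match the file exactly):

static int tcs_invalidate(struct rsc_drv *drv, int type)
{
    int m;
    struct tcs_group *tcs = get_tcs_of_type(drv, type);

    /*
     * Today the lock is only contended if someone other than the
     * last-CPU PM path calls in; keeping it costs nothing on the
     * current single-caller path.
     */
    spin_lock(&tcs->lock);
    if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) {
        spin_unlock(&tcs->lock);
        return 0;
    }

    for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
        if (!tcs_is_free(drv, m)) {
            spin_unlock(&tcs->lock);
            return -EAGAIN;
        }
        write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0);
    }
    bitmap_zero(tcs->slots, MAX_TCS_SLOTS);
    spin_unlock(&tcs->lock);

    return 0;
}

Keeping it is harmless on today's single-caller path and saves having to
re-add it once other RSCs start calling in.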


> Specifically, when reasoning about rpmh.c and rpmh-rsc.c I can only
> look at the code that's there now and decide whether it is race-free
> or whether there are races. Back when I was analyzing the proposal to
> do rpmh_flush() all the time (not from PM code), it felt like there
> were a bunch of races, especially in the zero-active-TCS case. Most of
> the races go away when you assume that rpmh_flush() is only ever called
> from the PM code when nobody could be in the middle of an active
> transfer.
>
> If we are ever planning to call rpmh_flush() from another place we
> need to re-look at all those races.
Sure, we can re-look at all those cases.
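
For context, the only caller today is the CPU PM notifier path in
rpmh-rsc.c, roughly along these lines (an illustrative sketch of the idea
in this series; names like cpus_entered_pm, pm_lock and
rpmh_rsc_ctrlr_is_busy() are approximate and may not match the patches
line for line):

static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
                                    unsigned long action, void *v)
{
    struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
    int ret = NOTIFY_OK;

    spin_lock(&drv->pm_lock);

    switch (action) {
    case CPU_PM_ENTER:
        cpumask_set_cpu(smp_processor_id(), &drv->cpus_entered_pm);
        /* only the last CPU to enter low power mode does the flush */
        if (!cpumask_equal(&drv->cpus_entered_pm, cpu_online_mask))
            goto exit;
        break;
    case CPU_PM_ENTER_FAILED:
    case CPU_PM_EXIT:
        cpumask_clear_cpu(smp_processor_id(), &drv->cpus_entered_pm);
        goto exit;
    default:
        ret = NOTIFY_DONE;
        goto exit;
    }

    /*
     * Nobody can be in the middle of an active transfer here, so
     * flushing the cached sleep/wake votes is safe; abort the low
     * power mode entry if a TCS is still busy or the flush fails.
     */
    if (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client))
        ret = NOTIFY_BAD;

exit:
    spin_unlock(&drv->pm_lock);
    return ret;
}

If rpmh_flush() ever gets another caller outside this last-CPU path, both
the busy check and the cache locking would need to be looked at again.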


> -Doug
Thanks,
Maulik

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation