Hi,

On Fri, Mar 27, 2020 at 5:04 AM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:
>> Why can't rpmh_write() / rpmh_write_async() / rpmh_write_batch() just
>> always unconditionally mark the cache dirty? Are there really lots of
>> cases when those calls are made and they do nothing?
>
> We should not blindly mark caches dirty everytime. At rpmh.c, it
> doesn't know that rpmh-rsc.c worked on borrowed TCS to finish the
> request.
>
> Ok, i will remove marking cache dirty here.

In message ID "5a5274ac-41f4-b06d-ff49-c00cef67aa7f@xxxxxxxxxxxxxx",
which seems to be missing from the archives, you said:
"yes we should trust callers not to send duplicate data"

You can see some reference to it in my reply:

https://lore.kernel.org/r/CAD=FV=VPSahhK71k_D+nfL1=5QE5sKMQT=6zzyEF7+JWMcTxsg@xxxxxxxxxxxxxx/
If callers are trusted to never send duplicate data, then every call
to rpmh_write() will make a change ...and thus the cache should always
be marked dirty, no? Also note that since rpmh_write() to "active"
also counts as a write to "wake", even those writes will dirty the
cache. Which case are you expecting a rpmh_write() call to not dirty
the cache?
> There are other RSCs which use same driver, so lets keep spinlock.

Interestingly, after your patch I guess tcs_invalidate() no longer
needs spinlocks, since it's only ever called from PM code on the last
CPU. ...if you agree, I can always do it in my cleanup series. See:

https://lore.kernel.org/r/CAD=FV=Xp1o68HnC2-hMnffDDsi+jjgc9pNrdNuypjQZbS5K4nQ@xxxxxxxxxxxxxx

It is really hard to try to write code keeping in mind these "other RSCs"
for which there is no code upstream. IMO we should write the code
keeping in mind what is supported upstream and then when those "other
RSCs" get added we can evaluate their needs.
> Sure. we can re-look all cases.
Specifically, when reasoning about rpmh.c and rpmh-rsc.c I can only
look at the code that's there now and decide whether it is race free
or whether there are races. Back when I was analyzing the proposal to
call rpmh_flush() all the time (not just from PM code), it felt like
there were a bunch of races, especially in the zero-active-TCS case.
Most of those races go away when you assume that rpmh_flush() is only
ever called from the PM code, when nobody can be in the middle of an
active transfer.

If we are ever planning to call rpmh_flush() from another place, we
need to re-examine all those races.
Thanks,
-Doug