On Sun, 1 Jan 2012, kosaki.motohiro@xxxxxxxxx wrote:

> From: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
>
> shmctl also doesn't need a synchronous pagevec drain. This patch
> replaces it with lru_add_drain_all_async().
>
> Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>

Let me answer this 2/2 first since it's easier.
I'm going to thank you for bringing this lru_add_drain_all() to my
attention (I had not noticed it), but NAK the patch itself.

The reason is that this particular lru_add_drain_all() serves no useful
purpose, so let's delete it instead of replacing it: I believe it serves
no purpose for SHM_LOCK and no purpose for SHM_UNLOCK.
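
Concretely, the call I mean is the one at the top of the
SHM_LOCK/SHM_UNLOCK case in ipc/shm.c's shmctl. A sketch of the
deletion I have in mind (context quoted from memory, so take it as
illustrative, not a hunk meant to apply as-is):

--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ SYSCALL_DEFINE3(shmctl, ...) @@
 	case SHM_LOCK:
 	case SHM_UNLOCK:
 	{
-		lru_add_drain_all();  /* drain pagevecs to lru lists */
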
I'm dabbling in this area myself, since you so cogently pointed out that
I'd tried to add a cond_resched() to scan_mapping_unevictable_pages()
(which is a helper for SHM_UNLOCK here) while it's under spinlock.
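
To spell out the bug you caught: that helper walks each pagevec batch
with zone->lru_lock held and irqs off, so a cond_resched() inside the
loop may sleep while holding the spinlock. Schematically (simplifying
away the real zone-switching logic):

	spin_lock_irq(&zone->lru_lock);
	for (i = 0; i < pagevec_count(&pvec); i++) {
		check_move_unevictable_page(pvec.pages[i], zone);
		cond_resched();	/* BUG: may sleep with lru_lock held */
	}
	spin_unlock_irq(&zone->lru_lock);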

In testing my fix for that, I find that there has been no attempt to
keep the Unevictable count accurate on SysV shm: SHM_LOCK'd pages get
marked unevictable only lazily, later, as memory pressure discovers
them - which perhaps mirrors the way in which SHM_LOCK, unlike mlock,
makes no attempt to instantiate pages.
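
(The laziness is structural, as I understand it: shmem_lock() merely
flags the mapping, leaving already-resident pages wherever they sit on
the LRU, and it's vmscan that later culls them as it happens upon them.
Roughly, from memory rather than a quote of the tree:

	/* mm/shmem.c: SHM_LOCK just flags the mapping */
	if (lock && !(info->flags & VM_LOCKED)) {
		info->flags |= VM_LOCKED;
		mapping_set_unevictable(file->f_mapping);
	}

	/* mm/vmscan.c: page_evictable() notices the flag under pressure */
	if (mapping_unevictable(page_mapping(page)))
		return 0;	/* cull to the unevictable LRU */

So nothing touches the Unevictable count at SHM_LOCK time itself.)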

Since nobody has complained about that in the two years since we've had
an Unevictable count in /proc/meminfo, I don't see any need to add code
and overhead to satisfy a need that nobody has (it would take more than
just your change here - more even than calling
scan_mapping_unevictable_pages() at SHM_LOCK time, though perhaps your
1/2 together with that could handle it).

I'll delete that lru_add_drain_all() in my patch, okay?

(But in writing this, I realize I still don't quite understand why the
Unevictable count takes a second or two to get back to 0 after
SHM_UNLOCK: perhaps I've more to discover.)