[PATCH v2] mm: mlock: remove lru_add_drain_all()

From: Shakeel Butt
Date: Thu Oct 19 2017 - 18:25:27 EST


lru_add_drain_all() is not required by mlock(): it drains whatever
happens to be cached on the per-CPU pagevecs at the time mlock() is
called, and that is not really related to the memory which will be
faulted in (and cached) and mlocked by the syscall itself.

Without lru_add_drain_all() the mlocked pages can remain on pagevecs
and be moved to the evictable LRUs, but they will eventually be moved
back to the unevictable LRU by reclaim. So, we can safely remove
lru_add_drain_all() from the mlock syscall. There is also no need for
a local lru_add_drain(), as it will be called deep inside
__mm_populate() (in follow_page_pte()).

On larger machines the overhead of lru_add_drain_all() in mlock() can
be significant when mlocking data that is already in memory. We have
observed high mlock() latency due to lru_add_drain_all() when users
were mlocking in-memory tmpfs files.

Signed-off-by: Shakeel Butt <shakeelb@xxxxxxxxxx>
---
Changelog since v1:
- updated commit message

mm/mlock.c | 5 -----
1 file changed, 5 deletions(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index dfc6f1912176..3ceb2935d1e0 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -669,8 +669,6 @@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t flags)
 	if (!can_do_mlock())
 		return -EPERM;
 
-	lru_add_drain_all();	/* flush pagevec */
-
 	len = PAGE_ALIGN(len + (offset_in_page(start)));
 	start &= PAGE_MASK;
 

@@ -797,9 +795,6 @@ SYSCALL_DEFINE1(mlockall, int, flags)
 	if (!can_do_mlock())
 		return -EPERM;
 
-	if (flags & MCL_CURRENT)
-		lru_add_drain_all();	/* flush pagevec */
-
 	lock_limit = rlimit(RLIMIT_MEMLOCK);
 	lock_limit >>= PAGE_SHIFT;
 

--
2.15.0.rc0.271.g36b669edcc-goog