Re: [PATCH 2/9] mm, oom: introduce oom reaper
From: Andrew Morton
Date: Tue Mar 22 2016 - 18:45:42 EST
On Tue, 22 Mar 2016 12:00:19 +0100 Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> This is based on the idea from Mel Gorman discussed during LSFMM 2015 and
> independently brought up by Oleg Nesterov.
What happened to oom-reaper-handle-mlocked-pages.patch? I have it in
-mm but I don't see it in this v6.
From: Michal Hocko <mhocko@xxxxxxxx>
Subject: oom reaper: handle mlocked pages
__oom_reap_vmas currently skips over all mlocked vmas because they need
special treatment before they are unmapped. This is primarily done for
simplicity. There is no reason to skip over them, though, and doing so
reduces the amount of reclaimed memory. This is safe from the semantic
point of view because try_to_unmap_one during the rmap walk would keep
telling reclaim to cull the page and mlock it again.
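
For reference, the VM_LOCKED handling in try_to_unmap_one() looks
roughly like the following. This is a simplified sketch of the
mm/rmap.c code of this era, not the complete function; the real code
does the re-mlock under a trylocked mmap_sem:

	/* Simplified sketch of try_to_unmap_one()'s VM_LOCKED path */
	if (!(flags & TTU_IGNORE_MLOCK)) {
		if (vma->vm_flags & VM_LOCKED) {
			/*
			 * Cull the page back to the unevictable list
			 * and mlock it again instead of reclaiming it.
			 */
			mlock_vma_page(page);
			ret = SWAP_MLOCK;
			goto out_unmap;
		}
	}

So even after the reaper has munlocked and unmapped the victim's pages,
any page still mapped mlocked elsewhere keeps being culled back by
reclaim.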
munlock_vma_pages_all is also safe to call from the oom reaper context
because it doesn't depend on any locks other than mmap_sem (held for
read), which the reaper already holds.
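
For context, the reaper only ever takes mmap_sem for read, roughly as
follows (a simplified sketch assuming the trylock-based locking used by
this series; error handling omitted):

	/* Simplified locking context of __oom_reap_vmas() */
	if (!down_read_trylock(&mm->mmap_sem))
		return false;	/* contended; the reaper will retry */
	...
	/* per-VMA munlock + unmap, as in the hunk below */
	...
	up_read(&mm->mmap_sem);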
Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Andrea Arcangeli <andrea@xxxxxxxxxx>
Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
Cc: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
mm/oom_kill.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
diff -puN mm/oom_kill.c~oom-reaper-handle-mlocked-pages mm/oom_kill.c
--- a/mm/oom_kill.c~oom-reaper-handle-mlocked-pages
+++ a/mm/oom_kill.c
@@ -442,13 +442,6 @@ static bool __oom_reap_vmas(struct mm_st
continue;
/*
- * mlocked VMAs require explicit munlocking before unmap.
- * Let's keep it simple here and skip such VMAs.
- */
- if (vma->vm_flags & VM_LOCKED)
- continue;
-
- /*
* Only anonymous pages have a good chance to be dropped
* without additional steps which we cannot afford as we
* are OOM already.
@@ -458,9 +451,12 @@ static bool __oom_reap_vmas(struct mm_st
* we do not want to block exit_mmap by keeping mm ref
* count elevated without a good reason.
*/
- if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED))
+ if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) {
+ if (vma->vm_flags & VM_LOCKED)
+ munlock_vma_pages_all(vma);
unmap_page_range(&tlb, vma, vma->vm_start, vma->vm_end,
&details);
+ }
}
tlb_finish_mmu(&tlb, 0, -1);
up_read(&mm->mmap_sem);
_