On Thu, Mar 19, 2020 at 07:19:42AM +0800, Yang Shi wrote:
> When khugepaged collapses anonymous pages, the base pages would be freed
> via pagevec or free_page_and_swap_cache(). But, the anonymous page may
> be added back to LRU, then it might result in the below race:
>
> 	CPU A				CPU B
> khugepaged:
>   unlock page
>   putback_lru_page
>     add to lru
> 				page reclaim:
> 				  isolate this page
> 				  try_to_unmap
> 				    page_remove_rmap <-- corrupt _mapcount
>
> It looks nothing would prevent the pages from isolating by reclaimer.

Hm. Why should it?
try_to_unmap() doesn't exclude parallel page unmapping. _mapcount is
protected by ptl. And this particular _mapcount pin is not reachable for
reclaim as it's not part of the usual page table tree. Basically
try_to_unmap() will never succeed until we give up the _mapcount on
the khugepaged side.
I don't see the issue right away.
> The other problem is the page's active or unevictable flag might be
> still set when freeing the page via free_page_and_swap_cache().

So what?
> The putback_lru_page() would not clear those two flags if the pages are
> released via pagevec, it sounds nothing prevents from isolating active
> or unevictable pages.

Again, why should it? vmscan is equipped to deal with this.
> However I didn't really run into these problems, just in theory by visual
> inspection.

Hm? But we do call putback_lru_page() on the way out. I do not follow.
> And, it also seems unnecessary to have the pages add back to LRU again since
> they are about to be freed when reaching this point. So, clearing active
> and unevictable flags, unlocking and dropping refcount from isolate
> instead of calling putback_lru_page() as what page cache collapse does.
> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Signed-off-by: Yang Shi <yang.shi@xxxxxxxxxxxxxxxxx>
> ---
>  mm/khugepaged.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b679908..f42fa4e 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -673,7 +673,6 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>  			src_page = pte_page(pteval);
>  			copy_user_highpage(page, src_page, address, vma);
>  			VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
> -			release_pte_page(src_page);
>  			/*
>  			 * ptl mostly unnecessary, but preempt has to
>  			 * be disabled to update the per-cpu stats
> @@ -687,6 +686,15 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
>  			pte_clear(vma->vm_mm, address, _pte);
>  			page_remove_rmap(src_page, false);
>  			spin_unlock(ptl);
> +
> +			dec_node_page_state(src_page,
> +				NR_ISOLATED_ANON + page_is_file_cache(src_page));
> +			ClearPageActive(src_page);
> +			ClearPageUnevictable(src_page);
> +			unlock_page(src_page);
> +			/* Drop refcount from isolate */
> +			put_page(src_page);
> +
>  			free_page_and_swap_cache(src_page);
>  		}
>  	}
> --
> 1.8.3.1