[PATCH 4/4] mm: thp: split_huge_pmd_address() comment improvement
From: Andrea Arcangeli
Date: Wed May 04 2016 - 18:59:27 EST
The comment is partly wrong; this improves it by also covering the case
of split_huge_pmd_address() being called by try_to_unmap_one() when
TTU_SPLIT_HUGE_PMD is set.
Signed-off-by: Andrea Arcangeli <aarcange@xxxxxxxxxx>
---
mm/huge_memory.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f8f07e4..e716726 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3032,8 +3032,10 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
return;
/*
- * Caller holds the mmap_sem write mode, so a huge pmd cannot
- * materialize from under us.
+ * Caller holds the mmap_sem write mode or the anon_vma lock,
+ * so a huge pmd cannot materialize from under us (khugepaged
+ * holds both the mmap_sem write mode and the anon_vma lock
+ * write mode).
*/
__split_huge_pmd(vma, pmd, address, freeze);
}
I also noticed we aren't calling page_move_anon_rmap in
do_huge_pmd_wp_page when page_trans_huge_mapcount returns 1; that's a
longstanding inefficiency, but it's not a bug. We're not taking the
page lock in the THP COW path because we don't have to deal with
swapcache, and in turn we can't overwrite page->mapping.

I think in practice it would be safe anyway because it's an atomic
write: no matter whether the rmap_walk reader sees the value before or
after the write, it'll still be able to find the pmd_trans_huge during
the rmap walk. However, if page->mapping can change under the reader
(i.e. rmap_walk), then the reader should use READ_ONCE to access
page->mapping (or page->mapping should become volatile). Otherwise
it'd be a bug as far as the C standard is concerned, where gcc could
get confused in theory (in practice it would work fine, as we're
mostly just dereferencing that page->mapping pointer and not using it
for switch/case or similar constructs where gcc could build a hash).

Regardless, for robustness it'd be better if we took appropriate
locking: first check whether page->mapping already points to the local
vma->anon_vma, and if not, take the page lock on the head THP and call
page_move_anon_rmap. Because this is a longstanding problem I didn't
address it yet, and it's only a missing optimization, but it'd be nice
to get that covered too (considering we just worsened the optimization
a bit in the presence of a COW after a pmd split and before the
physical split).