Prepare khugepaged to encounter compound pages mapped with PTEs. For now,
we won't collapse a PMD table that contains such PTEs.

khugepaged is subject to future rework with respect to the new refcounting.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Tested-by: Sasha Levin <sasha.levin@xxxxxxxxxx>
---
mm/huge_memory.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fa3d4f78b716..ffc30e4462c1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2653,6 +2653,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
page = vm_normal_page(vma, _address, pteval);
if (unlikely(!page))
goto out_unmap;
+
+ /* TODO: teach khugepaged to collapse THP mapped with pte */
+ if (PageCompound(page))
+ goto out_unmap;
+
/*
* Record which node the original page is from and save this
* information to khugepaged_node_load[].
@@ -2663,7 +2668,6 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
if (khugepaged_scan_abort(node))
goto out_unmap;
khugepaged_node_load[node]++;
- VM_BUG_ON_PAGE(PageCompound(page), page);
if (!PageLRU(page) || PageLocked(page) || !PageAnon(page))
goto out_unmap;
/*