On Tue, Nov 28, 2023 at 03:52:02PM +0100, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code. We
> don't want this hugetlb special-casing in the rmap functions, as
> we're special casing the callers already. Let's simply use a separate
> function for hugetlb.
We were special-casing some of the callers, not all of them.

And IIUC the suggestion from the community was to reduce that special-casing
rather than add more? To be explicit below...
> +++ b/mm/rmap.c
> @@ -1440,13 +1440,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>  	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
> -	/* Hugetlb pages are not counted in NR_*MAPPED */
> -	if (unlikely(folio_test_hugetlb(folio))) {
> -		/* hugetlb pages are always mapped with pmds */
> -		atomic_dec(&folio->_entire_mapcount);
> -		return;
> -	}
Fundamentally, in the ideal world where hugetlb can be merged into generic
mm, I'd imagine this special case simply dropping out here; meanwhile...
> -
>  	/* Is page being unmapped by PTE? Is this its last map to be removed? */
>  	if (likely(!compound)) {
>  		last = atomic_add_negative(-1, &page->_mapcount);
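
For reference, the removal above pairs with a separate hugetlb helper; a
minimal sketch of what that could look like, guessed only from the removed
lines above and not necessarily what the series actually adds:

	/*
	 * Sketch only, derived from the removed hunk above. hugetlb folios
	 * are always mapped as a whole, so only the entire mapcount needs
	 * to be dropped, and they are not accounted in NR_*MAPPED.
	 */
	static inline void hugetlb_remove_rmap(struct folio *folio)
	{
		VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);
		atomic_dec(&folio->_entire_mapcount);
	}
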
> @@ -1804,7 +1797,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>  			dec_mm_counter(mm, mm_counter_file(&folio->page));
>  		}
>  discard:
> -		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> +		if (unlikely(folio_test_hugetlb(folio)))
> +			hugetlb_remove_rmap(folio);
> +		else
> +			page_remove_rmap(subpage, vma, false);
... rather than moving the hugetlb_* handling even further up the stack, we
should hopefully be able to keep this one a generic API.

I worry this patch adds more hugetlb-specific paths even higher in the call
stack, which makes it harder to generalize and goes the other way from what
we wanted per previous discussions.
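
To make that concrete: what I'd hope we can keep is one generic entry point
that callers such as try_to_unmap_one() use unchanged, roughly along these
lines (the name and exact shape below are made up for illustration, not
taken from the series):

	/*
	 * Illustrative sketch only; the name and signature are invented for
	 * this discussion. The point is that the hugetlb check stays inside
	 * the rmap code instead of being duplicated at every call site.
	 */
	static inline void remove_rmap_generic(struct folio *folio,
					       struct page *subpage,
					       struct vm_area_struct *vma)
	{
		if (unlikely(folio_test_hugetlb(folio))) {
			/* hugetlb folios are always mapped as a whole */
			atomic_dec(&folio->_entire_mapcount);
			return;
		}
		page_remove_rmap(subpage, vma, false);
	}
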
That said, I have indeed read almost none of the recent rmap patches yet, so
I may be missing some important goal here.