Re: [PATCH v1 2/5] mm/rmap: introduce and use hugetlb_remove_rmap()
From: Peter Xu
Date: Tue Nov 28 2023 - 11:08:37 EST
On Tue, Nov 28, 2023 at 03:52:02PM +0100, David Hildenbrand wrote:
> hugetlb rmap handling differs quite a lot from "ordinary" rmap code. We
> don't want this hugetlb special-casing in the rmap functions, as
> we're special casing the callers already. Let's simply use a separate
> function for hugetlb.
We were special-casing some of the callers, not all of them..
And IIUC the suggestion from the community was that we reduce that "special
casing" rather than add more?  To be explicit below..
>
> Let's introduce and use hugetlb_remove_rmap() and remove the hugetlb
> code from page_remove_rmap(). This effectively removes one check on the
> small-folio path as well.
>
> While this is a cleanup, this will also make it easier to change rmap
> handling for partially-mappable folios.
>
> Note: all possible candidates that need care are page_remove_rmap() calls
> that pass compound=true.
>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
> ---
> include/linux/rmap.h | 5 +++++
> mm/hugetlb.c | 4 ++--
> mm/rmap.c | 17 ++++++++---------
> 3 files changed, 15 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index 4c5bfeb05463..e8d1dc1d5361 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -208,6 +208,11 @@ void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
> void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
> unsigned long address);
>
> +static inline void hugetlb_remove_rmap(struct folio *folio)
> +{
> + atomic_dec(&folio->_entire_mapcount);
> +}
> +
> static inline void __page_dup_rmap(struct page *page, bool compound)
> {
> if (compound) {
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4cfa0679661e..d17bb53b19ff 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5669,7 +5669,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> make_pte_marker(PTE_MARKER_UFFD_WP),
> sz);
> hugetlb_count_sub(pages_per_huge_page(h), mm);
> - page_remove_rmap(page, vma, true);
> + hugetlb_remove_rmap(page_folio(page));
>
> spin_unlock(ptl);
> tlb_remove_page_size(tlb, page, huge_page_size(h));
> @@ -5980,7 +5980,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
>
> /* Break COW or unshare */
> huge_ptep_clear_flush(vma, haddr, ptep);
> - page_remove_rmap(&old_folio->page, vma, true);
> + hugetlb_remove_rmap(old_folio);
> hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
> if (huge_pte_uffd_wp(pte))
> newpte = huge_pte_mkuffd_wp(newpte);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 112467c30b2c..5037581b79ec 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1440,13 +1440,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>
> VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>
> - /* Hugetlb pages are not counted in NR_*MAPPED */
> - if (unlikely(folio_test_hugetlb(folio))) {
> - /* hugetlb pages are always mapped with pmds */
> - atomic_dec(&folio->_entire_mapcount);
> - return;
> - }
Fundamentally, in the ideal world where hugetlb can be merged into generic
mm, I'd imagine this check simply being dropped here, meanwhile...
> -
> /* Is page being unmapped by PTE? Is this its last map to be removed? */
> if (likely(!compound)) {
> last = atomic_add_negative(-1, &page->_mapcount);
> @@ -1804,7 +1797,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
> dec_mm_counter(mm, mm_counter_file(&folio->page));
> }
> discard:
> - page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> + if (unlikely(folio_test_hugetlb(folio)))
> + hugetlb_remove_rmap(folio);
> + else
> + page_remove_rmap(subpage, vma, false);
... rather than moving the hugetlb_* handling even further up the stack, we
should hopefully be able to keep this one as a generic API.
I worry this patch adds more hugetlb-specific paths even higher up the call
stack, making it harder to generalize, which goes the other way round from
what we wanted per previous discussions.
That said, I have indeed read almost none of the recent rmap patches yet, so
I may be missing some important goal here.
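
For the sake of discussion, a rough sketch of what I mean by keeping a
single generic entry point (the folio_remove_rmap() name and its exact
parameters are hypothetical here, purely for illustration), so that callers
like try_to_unmap_one() wouldn't need to check folio_test_hugetlb()
themselves:

	/*
	 * Hypothetical generic helper: the caller passes the folio plus the
	 * precise subpage, and the hugetlb special case stays hidden in here
	 * rather than being duplicated at every call site.
	 */
	static inline void folio_remove_rmap(struct folio *folio,
					     struct page *subpage,
					     struct vm_area_struct *vma)
	{
		if (unlikely(folio_test_hugetlb(folio))) {
			/* hugetlb folios only track the entire mapcount */
			atomic_dec(&folio->_entire_mapcount);
			return;
		}
		page_remove_rmap(subpage, vma, false);
	}

Then the try_to_unmap_one()/try_to_migrate_one() call sites could stay a
single line.  Just a sketch to illustrate the direction, not a concrete
proposal.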
> if (vma->vm_flags & VM_LOCKED)
> mlock_drain_local();
> folio_put(folio);
> @@ -2157,7 +2153,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
> */
> }
>
> - page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
> + if (unlikely(folio_test_hugetlb(folio)))
> + hugetlb_remove_rmap(folio);
> + else
> + page_remove_rmap(subpage, vma, false);
> if (vma->vm_flags & VM_LOCKED)
> mlock_drain_local();
> folio_put(folio);
> --
> 2.41.0
>
>
Thanks,
--
Peter Xu