Re: [PATCH] mm/hugetlb: Convert &folio->page to folio_page(folio, 0)
From: Fan Ni
Date: Wed Apr 09 2025 - 18:08:20 EST
On Wed, Apr 09, 2025 at 04:15:30AM +0100, Matthew Wilcox wrote:
> On Tue, Apr 08, 2025 at 05:49:10PM -0700, nifan.cxl@xxxxxxxxx wrote:
> > From: Fan Ni <fan.ni@xxxxxxxxxxx>
> >
> > Convert the use of &folio->page to folio_page(folio, 0) wherever a
> > struct folio is already available. This is part of the effort to
> > move some fields out of struct page to reduce its size.
>
> Thanks for sending the patch. You've mixed together quite a few things;
> I'd suggest focusing on one API at a time.
...
> > */
> > - free_page_and_swap_cache(&new_folio->page);
> > + free_page_and_swap_cache(folio_page(new_folio, 0));
> > }
>
> free_page_and_swap_cache() should be converted to be
> free_folio_and_swap_cache().
While looking into this function, I see it is defined as a macro in
include/linux/swap.h (for the !CONFIG_SWAP case) as below:

#define free_page_and_swap_cache(page) \
	put_page(page)

While checking put_page() in include/linux/mm.h, I see that it always
converts the page to a folio anyway. Is there a reason why we do not
take a "folio" directly?
static inline void put_page(struct page *page)
{
	struct folio *folio = page_folio(page);

	if (folio_test_slab(folio))
		return;

	folio_put(folio);
}
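
If it were converted as you suggest, a minimal sketch of the
!CONFIG_SWAP variant could be (free_folio_and_swap_cache() is your
proposed name here, not an existing API):

static inline void free_folio_and_swap_cache(struct folio *folio)
{
	/* The caller already has the folio, so the page_folio()
	 * lookup done inside put_page() is no longer needed. */
	folio_put(folio);
}
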
Fan
>
>
> >
> > - return __folio_split(folio, new_order, &folio->page, page, list, true);
> > + return __folio_split(folio, new_order, folio_page(folio, 0), page,
> > + list, true);
> > }
>
> Probably right.
>
> > {
> > - return __folio_split(folio, new_order, split_at, &folio->page, list,
> > - false);
> > + return __folio_split(folio, new_order, split_at, folio_page(folio, 0),
> > + list, false);
> > }
>
> Ditto.
>
> >
> > - return split_huge_page_to_list_to_order(&folio->page, list, ret);
> > + return split_huge_page_to_list_to_order(folio_page(folio, 0), list,
> > + ret);
> > }
>
> Ditto.
>
> >
> > - if (is_migrate_isolate_page(&folio->page))
> > + if (is_migrate_isolate_page(folio_page(folio, 0)))
> > continue;
>
> I think we need an is_migrate_isolate_folio() instead of this.
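A minimal sketch of that helper, assuming it simply wraps the existing
page-based check (the name is your suggestion, not an existing API):

static inline bool is_migrate_isolate_folio(struct folio *folio)
{
	/* Mirrors the existing is_migrate_isolate_page(&folio->page)
	 * semantics by checking the folio's first page. */
	return is_migrate_isolate_page(folio_page(folio, 0));
}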
>
> > if (folio_test_anon(folio))
> > - __ClearPageAnonExclusive(&folio->page);
> > + __ClearPageAnonExclusive(folio_page(folio, 0));
> > folio->mapping = NULL;
>
> ... David.
>
> >
> > - split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst));
> > + split_page_owner(folio_page(folio, 0), huge_page_order(src),
> > + huge_page_order(dst));
>
> See earlier.
>
> > if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
> > - if (!PageAnonExclusive(&old_folio->page)) {
> > + if (!PageAnonExclusive(folio_page(old_folio, 0))) {
> > folio_move_anon_rmap(old_folio, vma);
> > - SetPageAnonExclusive(&old_folio->page);
> > + SetPageAnonExclusive(folio_page(old_folio, 0));
> > }
>
> David.
>
> > }
> > VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
> > - PageAnonExclusive(&old_folio->page), &old_folio->page);
> > + PageAnonExclusive(folio_page(old_folio, 0)),
> > + folio_page(old_folio, 0));
>
> The PageAnonExclusive() part of this change is for David to comment on,
> but this should be a VM_BUG_ON_FOLIO() instead of calling folio_page()
> to keep this a VM_BUG_ON_PAGE().
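For reference, the VM_BUG_ON_FOLIO() form would look something like
this (leaving the PageAnonExclusive() part page-based pending David's
input):

	VM_BUG_ON_FOLIO(folio_test_anon(old_folio) &&
			PageAnonExclusive(&old_folio->page), old_folio);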
>
> >
> > - unmap_ref_private(mm, vma, &old_folio->page,
> > - vmf->address);
> > + unmap_ref_private(mm, vma, folio_page(old_folio, 0),
> > + vmf->address);
>
> unmap_ref_private() only has one caller (this one), so make that take a
> folio. This is a whole series, all by itself.
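The signature change itself seems straightforward; a sketch (the body
of unmap_ref_private() would need the same folio treatment):

static void unmap_ref_private(struct mm_struct *mm,
			      struct vm_area_struct *vma,
			      struct folio *folio, unsigned long address)

with this call site becoming:

	unmap_ref_private(mm, vma, old_folio, vmf->address);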
>
> > hugetlb_cgroup_migrate(old_folio, new_folio);
> > - set_page_owner_migrate_reason(&new_folio->page, reason);
> > + set_page_owner_migrate_reason(folio_page(new_folio, 0), reason);
> >
>
> See earlier about page owner being folio or page based.
>
> > int ret;
> > - unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> > + unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> > unsigned long vmemmap_reuse;
>
> Probably right.
>
> > int ret = 0;
> > - unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> > + unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> > unsigned long vmemmap_reuse;
>
> Ditto.
>
> > - unsigned long vmemmap_start = (unsigned long)&folio->page, vmemmap_end;
> > + unsigned long vmemmap_start = (unsigned long)folio_page(folio, 0), vmemmap_end;
> > unsigned long vmemmap_reuse;
>
> Ditto.
>
> > */
> > - spfn = (unsigned long)&folio->page;
> > + spfn = (unsigned long)folio_page(folio, 0);
>
> Ditto.
>
> > register_page_bootmem_memmap(pfn_to_section_nr(spfn),
> > - &folio->page,
> > - HUGETLB_VMEMMAP_RESERVE_SIZE);
> > + folio_page(folio, 0),
> > + HUGETLB_VMEMMAP_RESERVE_SIZE);
>
> Don't change the indentation, but looks right.
>
> > result = SCAN_SUCCEED;
> > - trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> > - referenced, writable, result);
> > + trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> > + none_or_zero, referenced,
> > + writable, result);
> > return result;
>
> trace_mm_collapse_huge_page_isolate() should take a folio.
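If I understand correctly, that would mean changing the event's
prototype in include/trace/events/huge_memory.h along these lines
(sketch only; TP_fast_assign() would then use folio-based helpers such
as folio_pfn()):

	TP_PROTO(struct folio *folio, int none_or_zero,
		 int referenced, bool writable, int status),

	TP_ARGS(folio, none_or_zero, referenced, writable, status),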
>
> > release_pte_pages(pte, _pte, compound_pagelist);
> > - trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> > - referenced, writable, result);
> > + trace_mm_collapse_huge_page_isolate(folio_page(folio, 0),
> > + none_or_zero, referenced,
> > + writable, result);
> > return result;
>
> Ditto.
>
> > out:
> > - trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> > - none_or_zero, result, unmapped);
> > + trace_mm_khugepaged_scan_pmd(mm, folio_page(folio, 0), writable,
> > + referenced, none_or_zero, result,
> > + unmapped);
> > return result;
>
> Ditto.
>
> > result = install_pmd
> > - ? set_huge_pmd(vma, haddr, pmd, &folio->page)
> > + ? set_huge_pmd(vma, haddr, pmd, folio_page(folio, 0))
> > : SCAN_SUCCEED;
>
> I feel that set_huge_pmd() should take a folio.
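For illustration, a sketch of the folio-based signature (the function
body would need to switch to folio-aware helpers as well):

static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
			pmd_t *pmdp, struct folio *folio)

with the caller then reading:

	result = install_pmd
		? set_huge_pmd(vma, haddr, pmd, folio)
		: SCAN_SUCCEED;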