Re: [PATCH] mm/hugetlb: Convert &folio->page to folio_page(folio, 0)

From: David Hildenbrand
Date: Wed Apr 09 2025 - 05:03:51 EST


On 09.04.25 05:15, Matthew Wilcox wrote:
On Tue, Apr 08, 2025 at 05:49:10PM -0700, nifan.cxl@xxxxxxxxx wrote:
From: Fan Ni <fan.ni@xxxxxxxxxxx>

Convert the use of &folio->page to folio_page(folio, 0) where struct
folio fits in. This is part of the efforts to move some fields out of
struct page to reduce its size.

Thanks for sending the patch. You've mixed together quite a few things;
I'd suggest focusing on one API at a time.

Agreed.


folio_get(folio);
- folio_add_file_rmap_pmd(folio, &folio->page, vma);
+ folio_add_file_rmap_pmd(folio, folio_page(folio, 0), vma);
add_mm_counter(mm, mm_counter_file(folio), HPAGE_PMD_NR);

I think this is fine, but would defer to David Hildenbrand.

For now this should be fine. We want a pointer at the actual first page. In some future (with folios spanning multiple PMDs), this will not be correct.
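As background for readers outside mm/: in current kernels, folio_page(folio, n) resolves to the n-th constituent page counting from the folio's first page, which is why the 0-index form is equivalent to &folio->page today. A minimal userspace model of that relationship (a deliberately simplified sketch, not the real kernel layout, where struct folio overlays the compound head page):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structs (assumption: only the
 * embedding relationship matters for this illustration). */
struct page {
	unsigned long flags;
};

struct folio {
	struct page page;	/* first (head) page of the compound */
};

/* Mirrors the idea behind the kernel's folio_page(folio, n):
 * index n pages past the folio's first page. */
static struct page *folio_page(struct folio *folio, unsigned long n)
{
	return &folio->page + n;
}
```

With this model, folio_page(folio, 0) and &folio->page are the same pointer; the point David makes above is that once folios can span multiple PMDs, "page 0 of the folio" and "first page mapped by this PMD" stop being interchangeable.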

But the THP changes should *absolutely* not be included in this hugetlb patch. I was severely confused staring at the usage of folio_add_file_rmap_pmd() in hugetlb context.

Actually, having to go back and fix up my comments below now that I see that this touches

mm/huge_memory.c     | 30 +++++++++++++++++-------------
mm/hugetlb.c         | 20 +++++++++++---------
mm/hugetlb_vmemmap.c | 12 ++++++------

makes me angry.


folio_get(folio);
- folio_add_file_rmap_pud(folio, &folio->page, vma);
+ folio_add_file_rmap_pud(folio, folio_page(folio, 0), vma);
add_mm_counter(mm, mm_counter_file(folio), HPAGE_PUD_NR);

If that is fine, then so is this (put them in the same patchset).

spin_unlock(ptl);
- if (flush_needed)
- tlb_remove_page_size(tlb, &folio->page, HPAGE_PMD_SIZE);
+ if (flush_needed) {
+ tlb_remove_page_size(tlb, folio_page(folio, 0),
+ HPAGE_PMD_SIZE);
+ }

You don't need to add the extra braces here. I haven't looked into this
family of APIs; not sure if we should be passing the folio here or
if it should be taking a folio argument.

if (folio_maybe_dma_pinned(src_folio) ||
- !PageAnonExclusive(&src_folio->page)) {
+ !PageAnonExclusive(folio_page(src_folio, 0))) {
err = -EBUSY;

mmm. Another David question.

For now this should be correct. (the first page mapped by the PMD stores the flag)


for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
- struct page *new_head = &folio->page + i;
+ struct page *new_head = folio_page(folio, i);

This is definitely the right thing to do.

@@ -3403,7 +3405,7 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
if (new_order)
folio_set_order(folio, new_order);
else
- ClearPageCompound(&folio->page);
+ ClearPageCompound(folio_page(folio, 0));
}

I might be inclined to leave this one alone; this whole function needs
to be rewritten as part of the folio split.

folio_split_memcg_refs(folio, old_order, split_order);
- split_page_owner(&folio->page, old_order, split_order);
+ split_page_owner(folio_page(folio, 0), old_order, split_order);
pgalloc_tag_split(folio, old_order, split_order);

Not sure if split_folio_owner is something that should exist. Haven't
looked into it.

*/
- free_page_and_swap_cache(&new_folio->page);
+ free_page_and_swap_cache(folio_page(new_folio, 0));
}

free_page_and_swap_cache() should be converted to be
free_folio_and_swap_cache().
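If that conversion happens, the new helper could start life as a trivial shim; a hypothetical sketch (the name comes from the suggestion above, the body is an assumption):

```c
/* Hypothetical transition helper, sketch only: take the folio and
 * forward its first page until the callee is itself folio-aware. */
static inline void free_folio_and_swap_cache(struct folio *folio)
{
	free_page_and_swap_cache(&folio->page);
}
```

Callers like the hunk above would then pass new_folio directly instead of reaching for its first page.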

- return __folio_split(folio, new_order, &folio->page, page, list, true);
+ return __folio_split(folio, new_order, folio_page(folio, 0), page,
+ list, true);
}

Probably right.

{
- return __folio_split(folio, new_order, split_at, &folio->page, list,
- false);
+ return __folio_split(folio, new_order, split_at, folio_page(folio, 0),
+ list, false);
}

Ditto.

- return split_huge_page_to_list_to_order(&folio->page, list, ret);
+ return split_huge_page_to_list_to_order(folio_page(folio, 0), list,
+ ret);
}

Ditto.

- if (is_migrate_isolate_page(&folio->page))
+ if (is_migrate_isolate_page(folio_page(folio, 0)))
continue;

I think we need an is_migrate_isolate_folio() instead of this.
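Such a helper would presumably be a thin folio analogue of the existing page check; a hypothetical sketch (signature and placement are assumptions, it would live alongside is_migrate_isolate_page()):

```c
/* Hypothetical folio wrapper, sketch only. */
static inline bool is_migrate_isolate_folio(struct folio *folio)
{
	return is_migrate_isolate_page(&folio->page);
}
```

The call site above would then read is_migrate_isolate_folio(folio) with no page pointer in sight.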

if (folio_test_anon(folio))
- __ClearPageAnonExclusive(&folio->page);
+ __ClearPageAnonExclusive(folio_page(folio, 0));
folio->mapping = NULL;

... David.

See above.


- split_page_owner(&folio->page, huge_page_order(src), huge_page_order(dst));
+ split_page_owner(folio_page(folio, 0), huge_page_order(src),
+ huge_page_order(dst));

See earlier.

if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
- if (!PageAnonExclusive(&old_folio->page)) {
+ if (!PageAnonExclusive(folio_page(old_folio, 0))) {
folio_move_anon_rmap(old_folio, vma);
- SetPageAnonExclusive(&old_folio->page);
+ SetPageAnonExclusive(folio_page(old_folio, 0));
}

David.

See above.


}
VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
- PageAnonExclusive(&old_folio->page), &old_folio->page);
+ PageAnonExclusive(folio_page(old_folio, 0)),
+ folio_page(old_folio, 0));

The PageAnonExclusive() part of this change is for David to comment on,
but this should be a VM_BUG_ON_FOLIO() instead of calling folio_page()
to keep this a VM_BUG_ON_PAGE().
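A sketch of the suggested form (assuming the PageAnonExclusive() part survives David's review):

```c
VM_BUG_ON_FOLIO(folio_test_anon(old_folio) &&
		PageAnonExclusive(folio_page(old_folio, 0)),
		old_folio);
```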

Agreed.



--
Cheers,

David / dhildenb