Re: [PATCH] mm: Introduce free_folio_and_swap_cache() to replace free_page_and_swap_cache()

From: David Hildenbrand
Date: Thu Apr 10 2025 - 14:37:18 EST


On 10.04.25 20:25, Matthew Wilcox wrote:
> On Thu, Apr 10, 2025 at 02:16:09PM -0400, Zi Yan wrote:
>> @@ -49,7 +49,7 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
>> {
>> 	VM_WARN_ON_ONCE(delay_rmap);
>>
>> -	free_page_and_swap_cache(page);
>> +	free_folio_and_swap_cache(page_folio(page));
>> 	return false;
>> }
>>
>> __tlb_remove_page_size() is ruining the fun of the conversion. But it
>> will be converted to use folio eventually.

> Well, hm, I'm not sure. I haven't looked into this in detail.
> We have a __tlb_remove_folio_pages() which removes N pages but they must
> all be within the same folio:
>
> 	VM_WARN_ON_ONCE(page_folio(page) != page_folio(page + nr_pages - 1));
>
> but would we be better off just passing in the folio which contains the
> page and always flush all pages in the folio?

The delay_rmap case needs the precise pages, so we cannot easily switch to folio + nr_refs.

Once the per-page mapcounts are gone for good, we might no longer need page + nr_pages; folio + nr_refs would work then.

--
Cheers,

David / dhildenb