Re: [RFC v2 0/3] Decoupling large folios dependency on THP
From: Matthew Wilcox
Date: Fri Feb 27 2026 - 10:27:07 EST
On Fri, Feb 27, 2026 at 09:45:07AM +0100, David Hildenbrand (Arm) wrote:
> I guess it would be rather trivial to just replace
> add_to_page_cache_lru() by filemap_add_folio() in below code.
In the Ottawa interpretation, that's true, but I'd prefer not to revisit
this code when transitioning to the New York interpretation. This is
the NOMMU code after all, and the less time we spend on it, the better.
> > So either we need to reimplement all the good stuff that folio_split()
> > does for us, or we need to make folio_split() available on nommu.
>
> folio splitting usually involves unmapping pages, which is rather
> cumbersome on nommu ;) So we'd have to think about that and the
> implications.
Depending on your point of view, either everything is mapped on nommu,
or nothing is mapped ;-) In any case, the folio is freshly-allocated
and locked, so there's no chance anybody has mapped it yet.
> ramfs_nommu_expand_for_mapping() is all about allocating memory, not
> splitting something that might already be in use somewhere.
>
> So folio_split() on nommu is a bit weird in that context.
Well, it is, but it's also exactly what we need -- it frees the folios
which are now entirely beyond i_size. And it's code that's also used on
MMU systems, and the more code that's shared, the better.
> When it comes to allocating memory, I would assume that it would be
> better (and faster!) to
>
> a) allocate a frozen high-order page
>
> b) Create the (large) folios directly on chunks of the frozen page, and
> add them through filemap_add_folio().
>
> We'd have a function that consumes a suitable page range and turns it
> into a folio (later allocates memdesc).
>
> c) Return all unused frozen bits to the page allocator
Right, we could do that. But that's more code, and special-cased code,
in the nommu codebase.