Re: [RFC v2 0/3] Decoupling large folios dependency on THP
From: David Hildenbrand (Arm)
Date: Fri Feb 27 2026 - 03:45:58 EST
On 2/27/26 06:31, Matthew Wilcox wrote:
> On Sat, Dec 06, 2025 at 04:08:55AM +0100, Pankaj Raghav wrote:
>> There are multiple solutions to solve this problem and this is one of
>> them with minimal changes. I plan on discussing possible other solutions
>> at the talk.
>
> Here's an argument. The one remaining caller of add_to_page_cache_lru()
> is ramfs_nommu_expand_for_mapping(). Attached is a patch which
> eliminates it ... but it doesn't compile because folio_split() is
> undefined on nommu.
I guess it would be rather trivial to just replace
add_to_page_cache_lru() with filemap_add_folio() in the code below.
In the current code base that should work just fine unless I am missing
something important.
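Completely untested sketch of what I mean, assuming the order-0 pages
coming out of split_page() can simply be treated as single-page folios
via page_folio(); the attach loop in ramfs_nommu_expand_for_mapping()
would become something like (error path unchanged):

	for (loop = 0; loop < npages; loop++) {
		struct folio *folio = page_folio(pages + loop);

		ret = filemap_add_folio(inode->i_mapping, folio, loop, gfp);
		if (ret < 0)
			goto add_error;

		/* prevent the folio from being discarded on memory pressure */
		folio_set_dirty(folio);
		folio_mark_uptodate(folio);

		/* filemap_add_folio() returns the folio locked on success */
		folio_unlock(folio);
		folio_put(folio);
	}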
>
> So either we need to reimplement all the good stuff that folio_split()
> does for us, or we need to make folio_split() available on nommu.
folio splitting usually involves unmapping pages, which is rather
cumbersome on nommu ;) So we'd have to think about that and the
implications.
Could someone stumble over the large folio after it was already added to
the pagecache, but before it was split? I guess we'd need to hold the
folio lock until the split is done.
ramfs_nommu_expand_for_mapping() is all about allocating memory, not
splitting something that might already be in use somewhere.
So folio_split() on nommu is a bit weird in that context.
When it comes to allocating memory, I would assume that it would be
better (and faster!) to
a) allocate a frozen high-order page
b) create the (large) folios directly on chunks of the frozen page, and
add them through filemap_add_folio(). We'd have a function that consumes
a suitable page range and turns it into a folio (later allocates the
memdesc).
c) return all unused frozen pages to the page allocator
Rough sketch of that scheme below.
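Completely untested sketch. Note that alloc_frozen_pages() and
free_frozen_pages() are currently mm-internal (mm/internal.h), so ramfs
could not call them directly today, and folio_create_frozen() is a
hypothetical helper that does not exist yet: it would initialize a folio
(later: allocate the memdesc) on a range of frozen pages and return it
with a single reference held.

	/* Hypothetical: turn a frozen page range into a folio + one ref. */
	struct folio *folio_create_frozen(struct page *page, unsigned int order);

	static int expand_for_mapping_sketch(struct inode *inode, size_t newsize,
					     gfp_t gfp)
	{
		unsigned int order = get_order(newsize);
		unsigned long npages = DIV_ROUND_UP(newsize, PAGE_SIZE);
		unsigned long loop, xpages = 1UL << order;
		struct page *pages;
		int ret;

		/* a) allocate a frozen (refcount == 0) high-order page */
		pages = alloc_frozen_pages(gfp, order);
		if (!pages)
			return -ENOMEM;

		/*
		 * b) create folios directly on chunks of the frozen range and
		 * add them to the pagecache. Order-0 chunks for simplicity;
		 * larger folios would work the same way. Zeroing of the
		 * allocated range is omitted here.
		 */
		for (loop = 0; loop < npages; loop++) {
			struct folio *folio = folio_create_frozen(pages + loop, 0);

			ret = filemap_add_folio(inode->i_mapping, folio, loop, gfp);
			if (ret < 0)
				goto add_error;

			folio_set_dirty(folio);
			folio_mark_uptodate(folio);
			folio_unlock(folio);
			folio_put(folio);
		}

		/* c) return all unused frozen pages to the page allocator */
		for (; loop < xpages; loop++)
			free_frozen_pages(pages + loop, 0);

		return 0;

	add_error:
		/* unwinding of already-added folios + frozen tail omitted */
		return ret;
	}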
--
Cheers,
David