Re: [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups

From: Chris Arges

Date: Fri Mar 06 2026 - 15:11:40 EST


On 2026-03-06 16:28:19, Matthew Wilcox wrote:
> On Fri, Mar 06, 2026 at 02:13:26PM +0000, Kiryl Shutsemau wrote:
> > On Thu, Mar 05, 2026 at 07:24:38PM +0000, Matthew Wilcox wrote:
> > > folio_split() needs to be sure that it's the only one holding a reference
> > > to the folio. To that end, it calculates the expected refcount of the
> > > folio, and freezes it (sets the refcount to 0 if the refcount is the
> > > expected value). Once filemap_get_entry() has incremented the refcount,
> > > freezing will fail.
> > >
> > > But of course, we can race. filemap_get_entry() can load a folio first,
> > > the entire folio_split can happen, then it calls folio_try_get() and
> > > succeeds, but it no longer covers the index we were looking for. That's
> > > what the xas_reload() is trying to prevent -- if the index is for a
> > > folio which has changed, then the xas_reload() should come back with a
> > > different folio and we goto repeat.
> > >
> > > So how did we get through this with a reference to the wrong folio?
> >
> > What would xas_reload() return if we raced with split and index pointed
> > to a tail page before the split?
> >
> > Wouldn't it return the folio that was a head and check will pass?
>
> It's not supposed to return the head in this case. But check the code:
>
> 	if (!node)
> 		return xa_head(xas->xa);
> 	if (IS_ENABLED(CONFIG_XARRAY_MULTI)) {
> 		offset = (xas->xa_index >> node->shift) & XA_CHUNK_MASK;
> 		entry = xa_entry(xas->xa, node, offset);
> 		if (!xa_is_sibling(entry))
> 			return entry;
> 		offset = xa_to_sibling(entry);
> 	}
> 	return xa_entry(xas->xa, node, offset);
>
> (obviously CONFIG_XARRAY_MULTI is enabled)
>
Yes, we have CONFIG_XARRAY_MULTI enabled here.

Also, FWIW, I'm happy to run additional experiments or more debugging. We _can_
reproduce this: roughly one machine per day hits it across a sample of ~128
machines. We also get crashdumps, so we can poke around in those as needed.

I was going to deploy this patch onto a subset of machines, but reading through
this thread I'm a bit concerned that if a retry doesn't actually fix the
problem, we will just loop on this condition and hang.

--chris

> !node is almost certainly not true -- that's only the case if there's a
> single entry at offset 0, and we're talking about a situation where we
> have a large folio.
>
> I think we have two cases to consider; one where we've allocated a new
> node because we split an entry from order >=6 to order <6, and one where
> we just split an entry that stays at the same level in the tree.
>
> So let's say we're looking up an entry at index 1499 and first we got
> a folio that is at index 1024 order 9. So first, let's look at what
> happens if it's split into two order-8 folios. We get a reference on the
> first one, then we calculate offset as ((1499 >> 6) & 63) which is 23.
> Unless folio splitting is buggy, the original folio is in slot 16 and
> has sibling entries in 17,18,19 and the new folio is in slot 20 and has
> sibling entries in 21,22,23. So we should find a sibling entry in slot
> 23 that points to 20, then return the new folio in slot 20 which would
> mismatch the old folio that we got a refcount on.
>
> Then let's consider what happens if we split the entry at index 1499 into an
> order-0 folio. The folio split allocated a new node and put it at offset 23
> (and populated the new node, but we don't need to be concerned with that
> here). This time the lookup finds the new node and actually returns the
> node instead of a folio. But that's OK, because we're just checking
> for pointer equality, and there's no way this node compares equal to
> any folio we found (not least because it has a low bit set to indicate
> this is a node and not a pointer). So again the pointer equality check
> fails and we drop the speculative refcount we obtained and retry the loop.
>
> Have I missed something? Maybe a memory ordering problem?