Re: [PATCH RFC 1/1] mm/filemap: handle large folio split race in page cache lookups
From: Chris Arges
Date: Fri Mar 06 2026 - 15:59:09 EST
On 2026-03-06 20:21:59, Kiryl Shutsemau wrote:
> On Fri, Mar 06, 2026 at 02:11:22PM -0600, Chris Arges wrote:
> > On 2026-03-06 16:28:19, Matthew Wilcox wrote:
> > > On Fri, Mar 06, 2026 at 02:13:26PM +0000, Kiryl Shutsemau wrote:
> > > > On Thu, Mar 05, 2026 at 07:24:38PM +0000, Matthew Wilcox wrote:
> > > > > folio_split() needs to be sure that it's the only one holding a reference
> > > > > to the folio. To that end, it calculates the expected refcount of the
> > > > > folio, and freezes it (sets the refcount to 0 if the refcount is the
> > > > > expected value). Once filemap_get_entry() has incremented the refcount,
> > > > > freezing will fail.
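
[Interjecting with my own reading here, to check that I follow: conceptually
the freeze is the pattern below. This is only a sketch; try_freeze_for_split()
and expected_refs are made-up names, not the actual folio_split() code.]

	static int try_freeze_for_split(struct folio *folio, int expected_refs)
	{
		/*
		 * Atomically set the refcount to 0, but only if it still
		 * equals expected_refs.  A concurrent folio_try_get() that
		 * already bumped the count makes the freeze fail; once
		 * frozen, later folio_try_get() callers see 0 and fail.
		 */
		if (!folio_ref_freeze(folio, expected_refs))
			return -EAGAIN;
		return 0;
	}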
> > > > >
> > > > > But of course, we can race. filemap_get_entry() can load a folio first,
> > > > > the entire folio_split can happen, then it calls folio_try_get() and
> > > > > succeeds, but it no longer covers the index we were looking for. That's
> > > > > what the xas_reload() is trying to prevent -- if the index is for a
> > > > > folio which has changed, then the xas_reload() should come back with a
> > > > > different folio and we goto repeat.
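
[And my reading of the lookup side, roughly paraphrased from filemap_get_entry()
in mm/filemap.c -- simplified, so don't hold me to the exact lines:]

	rcu_read_lock();
repeat:
	xas_reset(&xas);
	folio = xas_load(&xas);
	if (xas_retry(&xas, folio))
		goto repeat;
	if (!folio || xa_is_value(folio))
		goto out;
	if (!folio_try_get(folio))
		goto repeat;		/* refcount was frozen or already zero */
	/* The folio may have been split or replaced since xas_load(). */
	if (unlikely(folio != xas_reload(&xas))) {
		folio_put(folio);
		goto repeat;
	}
out:
	rcu_read_unlock();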
> > > > >
> > > > > So how did we get through this with a reference to the wrong folio?
> > > >
> > > > What would xas_reload() return if we raced with split and index pointed
> > > > to a tail page before the split?
> > > >
> > > > Wouldn't it return the folio that was the head, so the check would pass?
> > >
> > > It's not supposed to return the head in this case. But check the code:
> > >
> > > 	if (!node)
> > > 		return xa_head(xas->xa);
> > > 	if (IS_ENABLED(CONFIG_XARRAY_MULTI)) {
> > > 		offset = (xas->xa_index >> node->shift) & XA_CHUNK_MASK;
> > > 		entry = xa_entry(xas->xa, node, offset);
> > > 		if (!xa_is_sibling(entry))
> > > 			return entry;
> > > 		offset = xa_to_sibling(entry);
> > > 	}
> > > 	return xa_entry(xas->xa, node, offset);
> > >
> > > (obviously CONFIG_XARRAY_MULTI is enabled)
> > >
> > Yes, we have CONFIG_XARRAY_MULTI enabled.
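
For what it's worth, my mental model of a multi-index page cache entry (a small
order-2 example; this is my understanding, not anything authoritative):

	/*
	 * Order-2 folio F occupying four slots in one node:
	 *
	 *   slot 0: F             <- canonical entry
	 *   slot 1: sibling(0)    <- xa_is_sibling() true, resolves to slot 0
	 *   slot 2: sibling(0)
	 *   slot 3: sibling(0)
	 *
	 * After a successful split, each slot should hold its own new
	 * folio, so reloading a former tail index should no longer land
	 * back on F via a sibling entry.
	 */

If that model is right, then after the split xas_reload() shouldn't hand back
the old head for a tail index, which matches what Matthew says above.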
> >
> > Also, FWIW, I'm happy to run additional experiments or more debugging. We _can_
> > reproduce this: across a sample of ~128 machines, roughly one hits it every day.
> > We also get crashdumps, so we can poke around in those as needed.
> >
> > I was going to deploy this patch onto a subset of machines, but reading through
> > this thread I'm a bit concerned that if a retry doesn't actually fix the problem,
> > we will just loop on this condition and hang.
>
> It would be useful to know whether the condition is persistent or whether a
> retry "fixes" the problem.
Fair enough. I suppose it will either crash or lock up, which should answer
that. I'll deploy early next week and see what happens.
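
If it would help narrow things down, I can also carry a rough debug hack on top
of the patch, something like the below in filemap_get_entry() (untested, and the
threshold is arbitrary), so a transient race that a retry fixes shows up
differently from a persistent entry that we spin on:

	unsigned int retries = 0;	/* new local at the top of the function */

repeat:					/* existing label; new check just below it */
	if (unlikely(++retries > 100))
		WARN_ONCE(1, "%s: still retrying index %lx after %u tries\n",
			  __func__, index, retries);
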
--chris