Re: [PATCH 2/2] mm, vmscan: flush TLB once per 31 folio evictions

From: Kairui Song

Date: Mon Mar 09 2026 - 09:20:24 EST


On Mon, Mar 9, 2026 at 8:42 PM Usama Arif <usama.arif@xxxxxxxxx> wrote:
>
> On Mon, 09 Mar 2026 16:17:42 +0800 Zhang Peng via B4 Relay <devnull+zippermonkey.icloud.com@xxxxxxxxxx> wrote:
>
> > From: bruzzhang <bruzzhang@xxxxxxxxxxx>
> >
> > Currently we flush the TLB for every dirty folio, which is a bottleneck
> > on systems with many cores as it causes heavy IPI usage.
> >
> > So instead, batch the folios and flush once for every 31 folios (one
> > folio_batch). These folios are held in a folio_batch with their locks
> > released; when the folio_batch is full, do the following steps:
> >
> > - For each folio: lock - check still evictable - unlock
> > - If no longer evictable, return the folio to the caller.
> > - Flush TLB once for the batch
> > - Pageout the folios (refcount freeze happens in the pageout path)
> >
> > Note we can't hold a frozen folio in a folio_batch for long, as that
> > would cause filemap/swapcache lookups to livelock. Fortunately pageout
> > usually doesn't take too long: sync IO is fast, and non-sync IO is
> > issued with the folio marked writeback.
> >
> > Suggested-by: Kairui Song <kasong@xxxxxxxxxxx>
> > Signed-off-by: bruzzhang <bruzzhang@xxxxxxxxxxx>
> > ---
> > mm/vmscan.c | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-------
> > 1 file changed, 61 insertions(+), 7 deletions(-)

...

> > folio_batch_init(&free_folios);
> > + folio_batch_init(&flush_folios);
> > +
> > memset(stat, 0, sizeof(*stat));
> > cond_resched();
> > do_demote_pass = can_demote(pgdat->node_id, sc, memcg);
> > @@ -1578,15 +1624,19 @@ static void shrink_folio_list(struct list_head *folio_list,
> > goto keep_locked;
> > if (!sc->may_writepage)
> > goto keep_locked;
> > -
> > /*
> > - * Folio is dirty. Flush the TLB if a writable entry
> > - * potentially exists to avoid CPU writes after I/O
> > - * starts and then write it out here.
> > + * For anon, we should only see swap cache (anon) and
> > + * the list pinning the page. For file page, the filemap
> > + * and the list pins it. Combined with the page_ref_freeze
> > + * in pageout_batch ensure nothing else touches the page
> > + * during lock unlocked.
> > */
>
> page_ref_freeze happens inside pageout_one() -> pageout() -> __remove_mapping(),
> which runs after the folio is re-locked and after the TLB flush. During
> the unlocked window, the refcount is not frozen. Right?
>
> With this patch, the folio is unlocked before try_to_unmap_flush_dirty() runs
> in pageout_batch(). During this window, TLB entries on other CPUs could allow
> writes to the folio after it has been selected for pageout. My understanding
> is that the original code intentionally flushed TLB while the folio was locked
> to prevent this? Could data corruption result if a write through a
> stale TLB entry races with the pageout I/O?

Hi Usama,

Thanks for the review. Yeah, the comment here seems wrong; I agree with you.

Hi Peng, I think you might have copied over a stale comment: at least
page_ref_freeze doesn't happen here, and that doesn't seem to be how
this patch works currently. Can you help double-check and update?

These folios are kept in the batch unlocked, unfrozen, and already
unmapped. They could get mapped again or otherwise touched in that
window, so the batch flush should re-lock the folios and redo some of
the checks that were done before the unmap; if they are still in a
ready-to-be-freed state, then flush, do the IO, and free them.

BTW, some checks seem to be missing in the batch re-check, e.g.
folio_maybe_dma_pinned().