Re: [PATCH] mm: vmscan: fix dirty folios throttling on cgroup v1 for MGLRU

From: Kairui Song

Date: Wed Apr 01 2026 - 22:45:44 EST


On Wed, Apr 01, 2026 at 01:32:40PM +0800, Shakeel Butt wrote:
> On Fri, Mar 27, 2026 at 06:21:08PM +0800, Baolin Wang wrote:
> > The balance_dirty_pages() won't do the dirty folios throttling on cgroupv1.
> > See commit 9badce000e2c ("cgroup, writeback: don't enable cgroup writeback
> > on traditional hierarchies").
> >
> > Moreover, after commit 6b0dfabb3555 ("fs: Remove aops->writepage"), we no
> > longer attempt to write back filesystem folios through reclaim.
> >
> > On large memory systems, the flusher may not be able to write back quickly
> > enough. Consequently, MGLRU will encounter many folios that are already
> > under writeback. Since we cannot reclaim these dirty folios, the system
> > may run out of memory and trigger the OOM killer.
> >
> > Hence, for cgroup v1, let's throttle reclaim after waking up the flusher,
> > which is similar to commit 81a70c21d917 ("mm/cgroup/reclaim: fix dirty
> > pages throttling on cgroup v1"), to avoid unnecessary OOM.
> >
> > The following test program can easily reproduce the OOM issue. With this patch
> > applied, the test passes successfully.
> >
> > $mkdir /sys/fs/cgroup/memory/test
> > $echo 256M > /sys/fs/cgroup/memory/test/memory.limit_in_bytes
> > $echo $$ > /sys/fs/cgroup/memory/test/cgroup.procs
> > $dd if=/dev/zero of=/mnt/data.bin bs=1M count=800
> >
> > Fixes: ac35a4902374 ("mm: multi-gen LRU: minimal implementation")
> > Reviewed-by: Barry Song <baohua@xxxxxxxxxx>
> > Reviewed-by: Kairui Song <kasong@xxxxxxxxxxx>
> > Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
> > ---
> > Changes from RFC:
> > - Add the Fixes tag.
> > - Add reviewed tag from Barry and Kairui. Thanks.
> > ---
> > mm/vmscan.c | 17 ++++++++++++++++-
> > 1 file changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 46657d2cef42..b5fdad1444af 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -5036,9 +5036,24 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
> > * If too many file cache in the coldest generation can't be evicted
> > * due to being dirty, wake up the flusher.
> > */
> > - if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.file_taken)
> > + if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.file_taken) {
> > + struct pglist_data *pgdat = lruvec_pgdat(lruvec);
> > +
> > wakeup_flusher_threads(WB_REASON_VMSCAN);
> >
> > + /*
> > + * For cgroupv1 dirty throttling is achieved by waking up
> > + * the kernel flusher here and later waiting on folios
> > + * which are in writeback to finish (see shrink_folio_list()).
> > + *
> > + * Flusher may not be able to issue writeback quickly
> > + * enough for cgroupv1 writeback throttling to work
> > + * on a large system.
> > + */
> > + if (!writeback_throttling_sane(sc))
> > + reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
>
> This seems fine but note that this throttling is not really the same as the
> throttling happening for traditional LRU. In traditional LRU, the kernel may
> throttle much more due to throttling check happening at each batch within
> shrink_inactive_list() while here the check is happening after full scan for the
> given memcg's lruvec. So, throttling can be much more aggressive for traditional
> LRU.

Right, I think Baolin's fix is good, but to improve the whole throttling
mechanism we need some rework of MGLRU; I have posted another series
for that.

>
> This is v1 only and I don't care much but what is stopping you from moving away
> from v1?

For example, memsw (the combined memory+swap limit, which only exists
on v1)?

https://lore.kernel.org/linux-mm/q2x4drxpjbxcxgns6bjp446ynsxgl32ckcljqcol7posds4nit@3n3tjq35anvb/

As I recall, Jingxiang's plan is to improve the page counter first.