Re: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching

From: Barry Song

Date: Mon Mar 02 2026 - 03:17:06 EST


On Mon, Mar 2, 2026 at 4:00 PM Kairui Song <ryncsn@xxxxxxxxx> wrote:
>
> On Mon, Mar 2, 2026 at 3:43 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> >
> > On Mon, Mar 2, 2026 at 2:58 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > >
> > > I assume latency is not a concern for a very rare
> > > MGLRU on/off case. Do you require the switch to happen
> > > with zero latency?
> > > My main concern is the correctness of the code.
> > >
> > > Now the proposed patch is:
> > >
> > > + bool lrugen_enabled = smp_load_acquire(&lruvec->lrugen.enabled);
> > > + bool lru_draining = smp_load_acquire(&lruvec->lrugen.draining);
> > >
> > > Then choose MGLRU or active/inactive LRU based on
> > > those values.
> > >
> > > However, nothing prevents those values from changing
> > > after they are read. Even within the shrink path,
> > > they can still change.
>
> Hi all,
>
> > If these values change during reclaim, the currently running
> > reclaimer will continue to operate with the old settings, while any
> > new reclaimers will adopt the new values. This should prevent any
> > immediate issues; the primary risk of this lockless approach is that
> > a user can rapidly toggle the MGLRU feature and catch a reclaimer in
> > an intermediate state.
> >
> > >
> > > So I think we need an rwsem or something similar here —
> > > a read lock for shrink and a write lock for on/off. The
> > > write lock should happen very rarely.
> >
> > We can introduce a lock-based mechanism in v2.
>
> I hope we don't need a lock here. Currently there is only a static
> key; this patch is already adding more branches, and a lock would
> make things even more complex. The shrinking path is quite
> performance-sensitive.

I agree that the shrinking path is performance-sensitive. However, the
bulk of the cost is in moving folios off the LRU, checking references
by scanning PTEs via rmap, unmapping, and compressing memory. Next to
that, the overhead of either an extra branch or a read lock is too
small to noticeably affect shrink performance.

Thanks
Barry