Re: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching
From: Barry Song
Date: Mon Mar 02 2026 - 04:21:37 EST
On Mon, Mar 2, 2026 at 4:25 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
>
> On Mon, Mar 2, 2026 at 4:00 PM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> >
> > On Mon, Mar 2, 2026 at 3:43 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > >
> > > On Mon, Mar 2, 2026 at 2:58 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > >
> > > > I assume latency is not a concern for a very rare
> > > > MGLRU on/off case. Do you require the switch to happen
> > > > with zero latency?
> > > > My main concern is the correctness of the code.
> > > >
> > > > Now the proposed patch is:
> > > >
> > > > + bool lrugen_enabled = smp_load_acquire(&lruvec->lrugen.enabled);
> > > > + bool lru_draining = smp_load_acquire(&lruvec->lrugen.draining);
> > > >
> > > > Then choose MGLRU or active/inactive LRU based on
> > > > those values.
> > > >
> > > > However, nothing prevents those values from changing
> > > > after they are read. Even within the shrink path,
> > > > they can still change.
> >
> > Hi all,
> >
> > > If these values are changed during reclaim, the currently running
> > > reclaimer will continue to operate with the old settings, while any
> > > new reclaimer processes will adopt the new values. This approach
> > > should prevent any immediate issues, but the primary risk of this
> > > lockless method is the potential for a user to rapidly toggle the
> > > MGLRU feature, particularly during an intermediate state.
> > >
> > > >
> > > > So I think we need an rwsem or something similar here —
> > > > a read lock for shrink and a write lock for on/off. The
> > > > write lock should happen very rarely.
> > >
> > > We can introduce a lock-based mechanism in v2.
> >
> > I hope we don't need a lock here. Currently there is only a static
> > key; this patch is already adding more branches, and a lock would
> > make things even more complex, while the shrink path is quite
> > performance-sensitive.
> >
> > > >
> > > > To be honest, the on/off toggle is quite odd. If possible,
> > > > I’d prefer not to switch MGLRU or active/inactive
> > > > dynamically. Once it’s set up during system boot, it
> > > > should remain unchanged.
> > >
> > > While it is well-suited for Android environments, it is not viable for
> > > Kubernetes production servers, where rebooting is highly disruptive.
> > > This limitation is precisely why we need to introduce dynamic toggles.
> >
> > I agree with Barry: the switch isn't supposed to be a knob that is
> > toggled frequently. In the long term, I think we should instead
> > identify the workloads where MGLRU doesn't work well and fix MGLRU.
>
> The challenge we're currently facing is that we don't yet know which
> workloads would benefit from it ;)
> We do want to enable mglru on our production servers, but first we
> need to address the risk of OOM during the switch—that's exactly why
> we're proposing this patch.
Nobody objects to your intention to fix it. I’m curious, though: to
what extent do we want to fix it? Do we aim merely to reduce the
probability of OOM and other mistakes, or do we want a complete fix
that makes the dynamic on/off fully safe?
Currently, many places appear fragile, mainly because
`lru_gen_enabled()` checks a global variable that doesn’t accurately
reflect where folios are during switching. A full fix might require
guarding the shrinking path against the switching path to prevent
simultaneous execution, which would add unnecessary complexity for a
rarely used "feature".
If our goal is only to reduce the probability of mistakes, I feel your
current patch may be fine, even though some race conditions
remain in principle.
Thanks
Barry