Re: [PATCH] mm/mglru: fix cgroup OOM during MGLRU state switching
From: Yafang Shao
Date: Mon Mar 02 2026 - 21:44:56 EST
On Tue, Mar 3, 2026 at 9:40 AM Axel Rasmussen <axelrasmussen@xxxxxxxxxx> wrote:
>
> On Mon, Mar 2, 2026 at 5:34 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> >
> > On Tue, Mar 3, 2026 at 1:52 AM Yuanchu Xie <yuanchu@xxxxxxxxxx> wrote:
> > >
> > > Hi Yafang,
> > >
> > > On Mon, Mar 2, 2026 at 8:36 AM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > > >
> > > > On Mon, Mar 2, 2026 at 5:48 PM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> > > > >
> > > > > On Mon, Mar 2, 2026 at 5:20 PM Barry Song <21cnbao@xxxxxxxxx> wrote:
> > > > > >
> > > > > > On Mon, Mar 2, 2026 at 4:25 PM Yafang Shao <laoar.shao@xxxxxxxxx> wrote:
> > > > > > >
> > > > > > > The challenge we're currently facing is that we don't yet know which
> > > > > > > workloads would benefit from it ;)
> > > > > > > We do want to enable mglru on our production servers, but first we
> > > > > > > need to address the risk of OOM during the switch—that's exactly why
> > > > > > > we're proposing this patch.
> > > > > >
> > > > > > Nobody objects to your intention to fix it. I’m curious: to what
> > > > > > extent do we want to fix it? Do we aim to merely reduce the probability
> > > > > > of OOM and other mistakes, or do we want a complete fix that makes
> > > > > > the dynamic on/off fully safe?
> > > > >
> > > > > Yeah, I'm glad that more people are trying MGLRU and improving it.
> > > > >
> > > > > We also have a downstream fix for the OOM-on-switch issue, but
> > > > > that's mostly a fallback in case MGLRU doesn't work well; our goal
> > > > > is still to enable MGLRU as much as possible,
> > > >
> > > > Our goals are aligned.
> > > > Before enabling mglru, we must first ensure it won't cause OOM errors
> > > > across multiple servers. We propose fixing this because, during our
> > > > previous mglru enablement, many instances of a single service OOM'd
> > > > simultaneously—potentially leading to data loss for that service.
> > >
> > > Would it be possible to drain the jobs away from the machine before
> > > switching LRUs? The MGLRU kill-switch could be improved, but making
> > > the switch more or less "hitless" would require significant work. Is
> > > the use case a one-time switch from active/inactive to MGLRU?
> >
> > I guess the point is that if upstream provides a sysctl to
> > toggle MGLRU on and off, then that sysctl should actually
> > work as intended. Otherwise, it would be better to remove
> > it.
>
> I think the problem is the requirements are not well specified. :)
We are planning to enable MGLRU across our large server fleet. During
a previous enablement attempt, we observed multiple instances of a
single service experiencing OOM errors simultaneously, which led to
unexpected user data loss. Despite this, we remain committed to
rolling out MGLRU to more production servers, with the critical
requirement of avoiding OOM events during the transition.
Given the scale of our fleet, it is not feasible to enable MGLRU on
servers one by one while continuously monitoring for OOM occurrences.
Therefore, we need to modify the kernel to minimize the risk of OOM
errors during the enablement process.
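For context, the switch in question is the runtime MGLRU knob described in
Documentation/admin-guide/mm/multigen_lru.rst. A minimal sketch of the
toggle (the exact mask values are kernel-dependent; this assumes the
standard sysfs path and requires root):

```shell
#!/bin/sh
# Sketch only: flip MGLRU on/off at runtime via its sysfs knob.
# The transition between the two LRU implementations is where the
# simultaneous OOM kills described above were observed.
KNOB=/sys/kernel/mm/lru_gen/enabled
if [ -w "$KNOB" ]; then
    cat "$KNOB"        # current enable mask, e.g. 0x0007 when fully on
    echo n > "$KNOB"   # switch back to the classic active/inactive LRU
    echo y > "$KNOB"   # switch to MGLRU; this is the risky state change
else
    echo "MGLRU knob not present or not writable on this kernel"
fi
```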
>
> Is it enough for the knob to function well on idle systems? Or does it
> need to function "ideally" under all conceivable workloads / stress?
> Also how do we define "ideally" - is a stray OOM kill acceptable or
> not? Is that preferable to waiting on the switch / drain to complete
> during reclaim or not? Reasonable users could disagree.
--
Regards
Yafang