Re: [PATCH v2 1/2] mm/mglru: fix cgroup OOM during MGLRU state switching

From: Barry Song

Date: Thu Mar 12 2026 - 02:03:05 EST


On Wed, Mar 11, 2026 at 8:11 PM Leno Hou via B4 Relay
<devnull+lenohou.gmail.com@xxxxxxxxxx> wrote:
>
> From: Leno Hou <lenohou@xxxxxxxxx>
>
> When the Multi-Gen LRU (MGLRU) state is toggled dynamically, a race
> condition exists between the state switching and the memory reclaim
> path. This can lead to unexpected cgroup OOM kills, even when plenty of
> reclaimable memory is available.
>
> Problem Description
> ===================
>
> The issue arises from a "reclaim vacuum" during the transition.
>
> 1. When disabling MGLRU, lru_gen_change_state() sets lrugen->enabled to
> false before the pages are drained from MGLRU lists back to
> traditional LRU lists.
> 2. Concurrent reclaimers in shrink_lruvec() see lrugen->enabled as false
> and skip the MGLRU path.
> 3. However, these pages might not have reached the traditional LRU lists
> yet, or the changes are not yet visible to all CPUs due to a lack of
> synchronization.
> 4. get_scan_count() subsequently finds traditional LRU lists empty,
> concludes there is no reclaimable memory, and triggers an OOM kill.
>
> A similar race can occur during enablement, where the reclaimer sees
> the new state but the MGLRU lists haven't been populated via
> fill_evictable() yet.
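>
> In simplified form, the disable path opens the window like this (a
> sketch of the ordering only; drain_evictable() is the existing
> counterpart of the fill_evictable() mentioned above):
>
> 	/* lru_gen_change_state(false), heavily simplified */
> 	lruvec->lrugen.enabled = false;	/* reclaimers skip MGLRU from now on */
> 	/*
> 	 * <-- window: folios still sit on the MGLRU lists, so
> 	 * get_scan_count() sees empty traditional lists and may OOM.
> 	 */
> 	drain_evictable(lruvec);	/* folios return to active/inactive */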
>
>
> Solution
> ========
>
> Introduce a 'draining' state (`lru_drain_core`) to bridge the
> transition. While this intermediate state is active, reclaimers are
> forced to attempt both the MGLRU and the traditional reclaim paths
> sequentially. This ensures that folios remain visible to at least one
> reclaim mechanism until the transition has propagated to all CPUs.
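>
> Conceptually, the reclaim side then becomes (an illustrative sketch of
> the idea, not the exact diff; lru_gen_draining() tests the new
> lru_drain_core static branch, and shrink_traditional() stands in for
> the existing active/inactive reclaim loop):
>
> 	if (lru_gen_enabled() || lru_gen_draining())
> 		lru_gen_shrink_lruvec(lruvec, sc);
>
> 	if (!lru_gen_enabled() || lru_gen_draining())
> 		shrink_traditional(lruvec, sc);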
>
> Changes
> =======
>
> - Adds a static branch `lru_drain_core` to track the transition state.
> - Updates shrink_lruvec(), shrink_node(), and kswapd_age_node() to allow
> a "joint reclaim" period during the transition.
> - Ensures all LRU helpers correctly identify page state by checking
> folio_lru_gen(folio) != -1 instead of relying solely on global flags.
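>
> The switch itself brackets the drain/fill with the new static branch,
> roughly (illustrative ordering):
>
> 	static_branch_enable(&lru_drain_core);
> 	/* flip lrugen->enabled and drain/fill the per-lruvec lists */
> 	static_branch_disable(&lru_drain_core);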
>
> This effectively eliminates the race window that previously triggered OOMs
> under high memory pressure.

I don't think this eliminates the race window, but it does reduce it.
There is nothing preventing the draining state from changing while
you are shrinking.

for example:

t1:                                   t2:
                                      lru_gen_draining() = false;
                                      Drain mglru

Drain mglru only....
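
i.e. if the joint reclaim is two separate tests, as in the sketch quoted
above, nothing orders them against the state switch:

	if (lru_gen_enabled() || lru_gen_draining())
		lru_gen_shrink_lruvec(lruvec, sc);

	/* t2 can finish the transition and clear the draining state here */

	if (!lru_gen_enabled() || lru_gen_draining())
		shrink_traditional(lruvec, sc);	/* skipped for this pass */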

>
> The issue was consistently reproduced on v6.1.157 and v6.18.3 using
> a high-pressure memory cgroup (v1) environment.
>
> To: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> To: Axel Rasmussen <axelrasmussen@xxxxxxxxxx>
> To: Yuanchu Xie <yuanchu@xxxxxxxxxx>
> To: Wei Xu <weixugc@xxxxxxxxxx>
> To: Barry Song <21cnbao@xxxxxxxxx>
> To: Jialing Wang <wjl.linux@xxxxxxxxx>
> To: Yafang Shao <laoar.shao@xxxxxxxxx>
> To: Yu Zhao <yuzhao@xxxxxxxxxx>
> To: Kairui Song <ryncsn@xxxxxxxxx>
> To: Bingfang Guo <bfguo@xxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Signed-off-by: Leno Hou <lenohou@xxxxxxxxx>
> ---
>  include/linux/mm_inline.h |  5 +++++
>  mm/rmap.c                 |  2 +-
>  mm/swap.c                 | 14 ++++++++------
>  mm/vmscan.c               | 49 ++++++++++++++++++++++++++++++++++++++---------
>  4 files changed, 54 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index fa2d6ba811b5..e6443e22bf67 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -321,6 +321,11 @@ static inline bool lru_gen_in_fault(void)
>  	return false;
>  }
>
> +static inline int folio_lru_gen(const struct folio *folio)
> +{
> +	return -1;
> +}
> +
>  static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
>  {
>  	return false;
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0f00570d1b9e..488bcdca65ed 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -958,7 +958,7 @@ static bool folio_referenced_one(struct folio *folio,
>  			return false;
>  		}
>
> -		if (lru_gen_enabled() && pvmw.pte) {
> +		if ((folio_lru_gen(folio) != -1) && pvmw.pte) {

I am not quite sure if a folio's gen is set to -1 when it is isolated
from MGLRU for reclamation. If so, I don't think this would work.
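
FWIW, in current trees the CONFIG_LRU_GEN folio_lru_gen() just decodes
the gen from folio->flags, roughly:

	static inline int folio_lru_gen(struct folio *folio)
	{
		unsigned long flags = READ_ONCE(folio->flags);

		return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
	}

and lru_gen_del_folio() clears LRU_GEN_MASK when a folio leaves the
MGLRU lists, so an isolated folio would read as gen -1 and take the
non-MGLRU branch here.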

Thanks
Barry