Re: [PATCHv4] mm: skip CMA pages when they are not available

From: Zhaoyang Huang
Date: Sun May 28 2023 - 21:03:07 EST


On Sat, May 27, 2023 at 3:36 AM David Hildenbrand <david@xxxxxxxxxx> wrote:
>
> On 22.05.23 08:36, zhaoyang.huang wrote:
> > From: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> >
> > This patch fixes unproductive reclaiming of CMA pages by skipping them when
> > they are not available to the current allocation context. It arises from the
> > OOM issue below, which is caused by a large proportion of MIGRATE_CMA pages
> > among the free pages.
> >
> > [ 36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
> > [ 36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
> > [ 36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
> > ...
> > [ 36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
> > [ 36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
> > [ 36.234459] [03-19 10:05:52.234] node 0: slabs: 53,objs: 3392, free: 0
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@xxxxxxxxxx>
> > ---
> > v2: update commit message and fix build error when CONFIG_CMA is not set
> > v3,v4: update code and comments
> > ---
> > mm/vmscan.c | 23 ++++++++++++++++++++++-
> > 1 file changed, 22 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index bd6637f..20facec 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -2193,6 +2193,26 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
> >
> > }
> >
> > +#ifdef CONFIG_CMA
> > +/*
> > + * It is a waste of effort to scan and reclaim CMA pages if they are not
> > + * available to the current allocation context
> > + */
>
> /*
> * Only movable allocations may end up on MIGRATE_CMA pageblocks. If
> * we're not dealing with a movable allocation, it doesn't make sense to
> * reclaim from these pageblocks: the reclaimed memory is unusable for
> * this allocation.
> */
>
> Did I get it right?
Yes, that's right.
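To spell out the constraint the comment captures (a simplified
illustration only; can_use_cma_pages() is a hypothetical helper I am
using here for explanation, not something in the tree):

/*
 * Hypothetical illustration: gfp_migratetype() maps the request's GFP
 * flags to a migratetype, and only __GFP_MOVABLE requests map to
 * MIGRATE_MOVABLE. Since MIGRATE_CMA pageblocks may only serve movable
 * allocations, memory reclaimed from them is useless to any other
 * request.
 */
static inline bool can_use_cma_pages(gfp_t gfp_mask)
{
        return gfp_migratetype(gfp_mask) == MIGRATE_MOVABLE;
}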
>
> > +static bool skip_cma(struct folio *folio, struct scan_control *sc)
> > +{
> > +     if (!current_is_kswapd() &&
> > +                     gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
> > +                     get_pageblock_migratetype(&folio->page) == MIGRATE_CMA)
> > +             return true;
> > +     return false;
>
> return !current_is_kswapd() &&
>        gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
>        get_pageblock_migratetype(&folio->page) == MIGRATE_CMA;
ok, thanks
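Putting your two suggestions together, the helper would collapse to
something like this (a sketch only; the !CONFIG_CMA stub is my
assumption of what the trimmed part of the diff provides, since the v2
note above mentions fixing the build when CONFIG_CMA is not set):

#ifdef CONFIG_CMA
/*
 * Only movable allocations may end up on MIGRATE_CMA pageblocks. If
 * we're not dealing with a movable allocation, it doesn't make sense
 * to reclaim from these pageblocks: the reclaimed memory is unusable
 * for this allocation.
 */
static bool skip_cma(struct folio *folio, struct scan_control *sc)
{
        return !current_is_kswapd() &&
                        gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
                        get_pageblock_migratetype(&folio->page) == MIGRATE_CMA;
}
#else
static bool skip_cma(struct folio *folio, struct scan_control *sc)
{
        return false;
}
#endif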
>
>
> --
> Thanks,
>
> David / dhildenb
>