Re: kswapd0: page allocation failure: order:0, mode:0x820(GFP_ATOMIC), nodemask=(null),cpuset=/,mems_allowed=0 (Kernel v6.5.9, 32bit ppc)

From: Yosry Ahmed
Date: Mon Jun 03 2024 - 19:24:50 EST


On Mon, Jun 3, 2024 at 3:13 PM Erhard Furtner <erhard_f@xxxxxxxxxxx> wrote:
>
> On Sun, 2 Jun 2024 20:03:32 +0200
> Erhard Furtner <erhard_f@xxxxxxxxxxx> wrote:
>
> > On Sat, 1 Jun 2024 00:01:48 -0600
> > Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
> >
> > > The OOM kills on both kernel versions seem to be reasonable to me.
> > >
> > > Your system has 2GB memory and it uses zswap with zsmalloc (which is
> > > good since it can allocate from the highmem zone) and zstd/lzo (which
> > > doesn't matter much). Somehow -- I couldn't figure out why -- it
> > > splits the 2GB into a 0.25GB DMA zone and a 1.75GB highmem zone:
> > >
> > > [ 0.000000] Zone ranges:
> > > [ 0.000000] DMA [mem 0x0000000000000000-0x000000002fffffff]
> > > [ 0.000000] Normal empty
> > > [ 0.000000] HighMem [mem 0x0000000030000000-0x000000007fffffff]
> > >
> > > The kernel can't allocate from the highmem zone -- only userspace and
> > > zsmalloc can. The OOM kills were due to low memory conditions in the
> > > DMA zone, which is where the kernel's own allocations failed.
> > >
> > > Do you know a kernel version that doesn't have OOM kills while running
> > > the same workload? If so, could you send that .config to me? If not,
> > > could you try disabling CONFIG_HIGHMEM? (It might not help but I'm out
> > > of ideas at the moment.)
>
> Ok, the bisect I did actually revealed something meaningful:
>
> # git bisect good
> b8cf32dc6e8c75b712cbf638e0fd210101c22f17 is the first bad commit
> commit b8cf32dc6e8c75b712cbf638e0fd210101c22f17
> Author: Yosry Ahmed <yosryahmed@xxxxxxxxxx>
> Date: Tue Jun 20 19:46:44 2023 +0000
>
> mm: zswap: multiple zpools support

Thanks for bisecting. Taking a look at the thread, it seems like the
kernel only has a very limited area of memory (the ~0.25GB DMA zone) to
allocate from. One possible reason that commit could cause an issue is
that we end up with multiple instances of the zsmalloc slab caches
'zspage' and 'zs_handle' (one pair per zpool), which may contribute to
fragmentation in slab memory.
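For context, every zs_create_pool() call sets up its own pair of kmem
caches (see create_cache() in mm/zsmalloc.c), so with the 32 zpools per
zswap pool that commit introduces we get that many pairs of them.
Roughly (paraphrased from my reading of mm/zsmalloc.c, not quoted
verbatim):

/* Each zs_create_pool() call creates its own pair of slab caches, so
 * N zpools mean N 'zs_handle' and N 'zspage' caches competing for (and
 * potentially fragmenting) low kernel memory.
 */
static int create_cache(struct zs_pool *pool)
{
	pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
						0, 0, NULL);
	if (!pool->handle_cachep)
		return 1;

	pool->zspage_cachep = kmem_cache_create("zspage", sizeof(struct zspage),
						0, 0, NULL);
	if (!pool->zspage_cachep) {
		kmem_cache_destroy(pool->handle_cachep);
		pool->handle_cachep = NULL;
		return 1;
	}

	return 0;
}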

Do you have /proc/slabinfo from a good and a bad run by any chance?

Also, could you check if the attached patch helps? It makes sure that
even when we use multiple zsmalloc zpools, we will use a single slab
cache of each type.
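For reference, the rough idea looks like this (a sketch of the
approach only, not the attached patch itself):

/* Sketch: create one 'zs_handle' and one 'zspage' cache at zsmalloc
 * init time and let every zs_pool share that pair, instead of creating
 * a new pair in each zs_create_pool() call.
 */
static struct kmem_cache *zs_handle_cachep;
static struct kmem_cache *zspage_cachep;

static int __init zs_init(void)
{
	zs_handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE,
					     0, 0, NULL);
	if (!zs_handle_cachep)
		return -ENOMEM;

	zspage_cachep = kmem_cache_create("zspage", sizeof(struct zspage),
					  0, 0, NULL);
	if (!zspage_cachep) {
		kmem_cache_destroy(zs_handle_cachep);
		return -ENOMEM;
	}

	/* rest of zs_init() (cpu hotplug, zpool driver, debugfs) unchanged */
	return 0;
}

/* the allocation helpers then use the shared caches, e.g.: */
static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
{
	return (unsigned long)kmem_cache_alloc(zs_handle_cachep,
			gfp & ~(__GFP_HIGHMEM | __GFP_MOVABLE));
}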

Attachment: 0001-mm-zsmalloc-share-slab-caches-for-all-zsmalloc-zpool.patch
Description: Binary data