Re: [PATCH v1] mm/vmscan: mitigate spurious kswapd_failures reset from direct reclaim
From: Shakeel Butt
Date: Tue Jan 06 2026 - 12:45:49 EST
On Tue, Jan 06, 2026 at 05:25:42AM +0000, Jiayuan Chen wrote:
> On January 5, 2026 at 12:51, "Shakeel Butt" <shakeel.butt@xxxxxxxxx> wrote:
>
> > I think the simplest solution for you is to enable swap to have more
> > reclaimable memory on the system. Hopefully you will have the working set
> > of the workloads fully in memory on each node.
> >
> > You can try to change the application/workload to be more NUMA aware and
> > balance their anon memory on the given nodes, but I think that would be
> > much more involved and error prone.
>
> Enabling swap is one solution, but due to historical reasons we haven't
> enabled it - our disk performance is relatively poor. zram is also an
> option, but the migration would take significant time.
Besides zram, you can try zswap with memory.zswap.writeback=0 to avoid
hitting the disk for swap. I would suggest trying swap (zswap or swap on
zram) on a couple of impacted machines to see if the issue you are
seeing is resolved.
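
For reference, a rough sketch of both setups mentioned above. This is an
illustrative example, not a tested recipe: the compression algorithm, zram
size, and cgroup path are placeholders you would pick for your own systems,
and the paths assume cgroup v2 and a kernel built with zswap/zram support.

```shell
# Option 1: zswap with per-cgroup writeback disabled (needs a backing
# swap device configured, but writeback=0 keeps pages compressed in RAM
# for that cgroup instead of writing them to disk).
echo 1 > /sys/module/zswap/parameters/enabled
echo 0 > /sys/fs/cgroup/<your-cgroup>/memory.zswap.writeback

# Option 2: swap on zram (compressed RAM block device, no disk at all).
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm   # algorithm is a choice
echo 4G > /sys/block/zram0/disksize           # size is a placeholder
mkswap /dev/zram0
swapon /dev/zram0
```

Either way kswapd gets reclaimable anon memory on each node, which is
the point of the suggestion above.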