Re: [PATCH v1] mm/vmscan: Add retry logic for cgroups with memory.low in kswapd
From: Shakeel Butt
Date: Fri Nov 07 2025 - 19:09:39 EST
On Fri, Nov 07, 2025 at 02:22:14PM +0100, Michal Hocko wrote:
> Sorry for late reply.
>
> On Mon 20-10-25 10:11:23, Jiayuan Chen wrote:
> [...]
> > To provide more context about our specific setup:
> >
> > 1. The memory.low values set on host pods are actually quite large:
> > some pods are set to 10GB, others to 20GB, and so on.
> > 2. Since most pods have memory limits configured, each time kswapd
> > is woken up, a pod whose memory usage has not exceeded its own
> > memory.low will not have any memory reclaimed from it.
> > 3. When applications start up, rapidly consume memory, or experience
> > network traffic bursts, the kernel reaches steal_suitable_fallback(),
> > which sets watermark_boost and subsequently wakes kswapd.
> > 4. In the core logic of the kswapd thread (balance_pgdat()), when
> > reclaim is triggered by watermark_boost, the priority is capped at
> > DEF_PRIORITY - 2 (i.e. 10). Higher priority values mean less
> > aggressive LRU scanning, so a single scan cycle at that priority may
> > reclaim no pages at all:
> >
> > if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
> >         raise_priority = false;
> >
> > 5. This eventually causes pgdat->kswapd_failures to keep accumulating
> > until it exceeds MAX_RECLAIM_RETRIES, and consequently kswapd stops
> > working. At this point the system's available memory is still well
> > above the high watermark, so it is inappropriate for kswapd to stop
> > under these conditions.
> >
> > The final observable issue is that a brief period of rapid memory allocation
> > causes kswapd to stop running, ultimately triggering direct reclaim and
> > making the applications unresponsive.
>
> This to me sounds like something to be addressed in the watermark
> boosting code. I do not think we should be breaching the low limit for that
> (opportunistic) reclaim.
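For anyone skimming the thread, here is a rough userspace sketch of the
protection behaviour in question. This is not the kernel code; the struct,
helper names and numbers below are made up purely for illustration. The
point it shows is the one described above: as long as a cgroup's usage
stays under its memory.low, a regular (non-low) reclaim pass skips it
entirely, so a boosted kswapd pass over such pods makes no progress.

/* memlow_skip.c - illustrative userspace sketch, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

struct memcg {
    const char *name;
    unsigned long usage_mb;   /* current usage */
    unsigned long low_mb;     /* memory.low    */
};

/* A cgroup still under its memory.low is protected from normal reclaim. */
static bool protected_by_low(const struct memcg *mc)
{
    return mc->usage_mb <= mc->low_mb;
}

static unsigned long reclaim_pass(const struct memcg *cgs, int n,
                                  bool memcg_low_reclaim)
{
    unsigned long reclaimed_mb = 0;

    for (int i = 0; i < n; i++) {
        if (protected_by_low(&cgs[i]) && !memcg_low_reclaim) {
            printf("  %-6s skipped (usage %luM <= low %luM)\n",
                   cgs[i].name, cgs[i].usage_mb, cgs[i].low_mb);
            continue;
        }
        /* Pretend a scanned cgroup gives back a fraction of its usage. */
        reclaimed_mb += cgs[i].usage_mb / 8;
        printf("  %-6s scanned\n", cgs[i].name);
    }
    return reclaimed_mb;
}

int main(void)
{
    /* Pods with large memory.low values, as in the setup above. */
    struct memcg pods[] = {
        { "pod-a",  8192, 10240 },  /*  8G used, low = 10G */
        { "pod-b", 15360, 20480 },  /* 15G used, low = 20G */
    };
    int n = (int)(sizeof(pods) / sizeof(pods[0]));

    printf("boosted kswapd pass, memory.low honoured:\n");
    printf("reclaimed ~%luM -> no progress\n\n",
           reclaim_pass(pods, n, false));

    printf("pass allowed to dip into memory.low:\n");
    printf("reclaimed ~%luM\n", reclaim_pass(pods, n, true));
    return 0;
}

Running it, the first pass skips both pods and reclaims nothing, while a
pass that is allowed to dip into memory.low does make progress.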
Jiayuan has already posted v2 with a different approach. We can move the
discussion there.
http://lore.kernel.org/20251024022711.382238-1-jiayuan.chen@xxxxxxxxx
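
To make points 4 and 5 above concrete, here is a toy userspace model of
the failure accumulation. Again, this is not the kernel code: only
DEF_PRIORITY, MAX_RECLAIM_RETRIES and the balance_pgdat() name mirror the
real ones, everything else is faked for illustration. Boost-driven reclaim
never scans more aggressively than priority DEF_PRIORITY - 2, the gentle
scans are assumed to reclaim nothing, and the failure counter grows until
kswapd gives up.

/* kswapd_failures.c - toy model, not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define DEF_PRIORITY            12      /* matches the kernel constant */
#define MAX_RECLAIM_RETRIES     16      /* matches the kernel constant */

/* Fake scan: pretend the gentle priorities (>= 10) reclaim nothing. */
static unsigned long fake_shrink(int priority)
{
    return priority >= DEF_PRIORITY - 2 ? 0 : 32;
}

/*
 * Very rough shape of one boost-driven balance_pgdat() run: priority is
 * never allowed to become more aggressive than DEF_PRIORITY - 2, and a
 * run that reclaims nothing bumps the failure counter.  Returns false
 * once kswapd would give up.
 */
static bool balance_pgdat(bool nr_boost_reclaim, int *kswapd_failures)
{
    unsigned long nr_reclaimed = 0;

    for (int priority = DEF_PRIORITY; priority >= 1; priority--) {
        nr_reclaimed += fake_shrink(priority);

        if (nr_boost_reclaim && priority == DEF_PRIORITY - 2)
            break;              /* do not scan more aggressively */
        if (nr_reclaimed)
            break;
    }

    if (!nr_reclaimed)
        (*kswapd_failures)++;
    else
        *kswapd_failures = 0;

    return *kswapd_failures < MAX_RECLAIM_RETRIES;
}

int main(void)
{
    int kswapd_failures = 0;
    int wakeups = 0;

    /* Repeated boost-driven wakeups that make no progress. */
    do {
        wakeups++;
    } while (balance_pgdat(true, &kswapd_failures));

    printf("kswapd gives up after %d fruitless wakeups "
           "(kswapd_failures=%d >= MAX_RECLAIM_RETRIES)\n",
           wakeups, kswapd_failures);
    return 0;
}

This prints that kswapd gives up after MAX_RECLAIM_RETRIES fruitless
wakeups, which is the behaviour reported above: a short burst of
boost-driven wakeups that reclaim nothing is enough to park kswapd while
free memory is still above the high watermark.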