Re: [PATCH v1] mm/vmscan: Add retry logic for cgroups with memory.low in kswapd
From: Michal Hocko
Date: Fri Nov 07 2025 - 08:22:16 EST
Sorry for the late reply.
On Mon 20-10-25 10:11:23, Jiayuan Chen wrote:
[...]
> To provide more context about our specific setup:
>
> 1. The memory.low values set on host pods are actually quite large:
> some pods are set to 10GB, others to 20GB, and so on.
> 2. Most pods also have memory limits configured, and each time
> kswapd is woken, a pod whose usage has not exceeded its own
> memory.low has none of its memory reclaimed.
> 3. When applications start up, rapidly consume memory, or experience
> network traffic bursts, the kernel reaches steal_suitable_fallback(),
> which sets watermark_boost and subsequently wakes kswapd.
> 4. In the core kswapd loop (balance_pgdat()), when reclaim is
> triggered by watermark_boost, the scan priority never drops below
> DEF_PRIORITY - 2, i.e. 10. Higher priority values mean less
> aggressive LRU scanning, so a single scan cycle can end up
> reclaiming no pages at all:
>
> 	if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
> 		raise_priority = false;
>
> 5. This eventually causes pgdat->kswapd_failures to accumulate past
> MAX_RECLAIM_RETRIES, after which kswapd stops working. At this
> point the system's available memory is still significantly above
> the high watermark, so it is inappropriate for kswapd to stop under
> these conditions.
>
> The final observable issue is that a brief burst of rapid memory
> allocation stops kswapd entirely, pushing allocations into direct
> reclaim and making the applications unresponsive.
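To restate points 3-5 above in code terms, the path involved is
roughly the following (a condensed paraphrase of current
mm/page_alloc.c and mm/vmscan.c; locals and most of the loop body are
omitted, and details vary across kernel versions):

	/* steal_suitable_fallback(): boost the watermarks and arrange
	 * for kswapd to be woken from rmqueue() via the
	 * ZONE_BOOSTED_WATERMARK flag. */
	if (boost_watermark(zone) && (alloc_flags & ALLOC_KSWAPD))
		set_bit(ZONE_BOOSTED_WATERMARK, &zone->flags);

	/* balance_pgdat() reclaim loop, heavily condensed: */
	sc.priority = DEF_PRIORITY;	/* 12 */
	do {
		bool raise_priority = true;

		/* Boosted reclaim is capped at priority
		 * DEF_PRIORITY - 2 == 10 to avoid reclaim writeback,
		 * so it never scans deeper than that. */
		if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
			raise_priority = false;

		/* No writeback or swap for boosted reclaim; the intent
		 * is to relieve pressure, not to issue demand paging. */
		sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
		sc.may_swap = !nr_boost_reclaim;

		if (kswapd_shrink_node(pgdat, &sc))
			raise_priority = false;

		nr_reclaimed = sc.nr_reclaimed - nr_reclaimed;
		nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed);

		/* A boosted scan that reclaims nothing gives up... */
		if (nr_boost_reclaim && !nr_reclaimed)
			break;

		if (raise_priority || !nr_reclaimed)
			sc.priority--;
	} while (sc.priority >= 1);

	/* ...and counts as a failure. Once the counter reaches
	 * MAX_RECLAIM_RETRIES (16), wakeup_kswapd() treats the node
	 * as hopeless and stops waking kswapd, leaving only direct
	 * reclaim. */
	if (!sc.nr_reclaimed)
		pgdat->kswapd_failures++;
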
This to me sounds like something to be addressed in the watermark
boosting code. I do not think we should be breaching the low limit
for that (opportunistic) reclaim.
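For reference, kswapd today never breaches memory.low at all:
shrink_node_memcgs() skips every cgroup below its effective low
protection unless sc->memcg_low_reclaim is set, and only the direct
reclaim path (do_try_to_free_pages()) ever retries with that flag
set. Paraphrased from current mainline, condensed:

	mem_cgroup_calculate_protection(target_memcg, memcg);

	if (mem_cgroup_below_min(target_memcg, memcg)) {
		/* Hard protection: never reclaim from this cgroup. */
		continue;
	} else if (mem_cgroup_below_low(target_memcg, memcg)) {
		/* Soft protection: respect memory.low unless this scan
		 * was explicitly allowed to breach it. kswapd never
		 * sets sc->memcg_low_reclaim, so it always skips. */
		if (!sc->memcg_low_reclaim) {
			sc->memcg_low_skipped = 1;
			continue;
		}
		memcg_memory_event(memcg, MEMCG_LOW);
	}
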
--
Michal Hocko
SUSE Labs