Re: [PATCH mm-unstable v3] mm/page_alloc: keep track of free highatomic
From: Vlastimil Babka
Date: Mon Oct 28 2024 - 14:34:00 EST
On 10/28/24 19:26, Yu Zhao wrote:
> OOM kills due to vastly overestimated free highatomic reserves were
> observed:
>
> ... invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0 ...
> Node 0 Normal free:1482936kB boost:0kB min:410416kB low:739404kB high:1068392kB reserved_highatomic:1073152KB ...
> Node 0 Normal: 1292*4kB (ME) 1920*8kB (E) 383*16kB (UE) 220*32kB (ME) 340*64kB (E) 2155*128kB (UE) 3243*256kB (UE) 615*512kB (U) 1*1024kB (M) 0*2048kB 0*4096kB = 1477408kB
>
> The second line above shows that the OOM kill was due to the following
> condition:
>
> free (1482936kB) - reserved_highatomic (1073152kB) = 409784kB < min (410416kB)
>
> And the third line shows there were no free pages in any
> MIGRATE_HIGHATOMIC pageblocks, which would otherwise show up as type
> 'H'. Therefore __zone_watermark_unusable_free() underestimated the
> usable free memory by over 1GB, which resulted in the unnecessary OOM
> kill above.
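For the record, if I read the dump correctly, with the accurate count
this patch introduces the same situation would have computed (the buddy
list breakdown shows 0kB free in highatomic pageblocks):

    free (1482936kB) - free highatomic (0kB) = 1482936kB > min (410416kB)

i.e. the watermark check would have passed and the OOM kill avoided.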
>
> The comment in __zone_watermark_unusable_free() warns about the
> potential risk, i.e.,
>
> If the caller does not have rights to reserves below the min
> watermark then subtract the high-atomic reserves. This will
> over-estimate the size of the atomic reserve but it avoids a search.
>
> However, it is possible to keep track of free pages in reserved
> highatomic pageblocks with a new per-zone counter, nr_free_highatomic,
> protected by the zone lock, to avoid a search when calculating the
> usable free memory. The cost is minimal, i.e., simple arithmetic in
> the highatomic alloc/free/move paths.
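As an aside, for anyone wondering what the search being avoided would
entail: a hypothetical on-demand version (count_free_highatomic() below
is my sketch, not code from the tree) would have to walk the free list
of every order under zone->lock, roughly:

static unsigned long count_free_highatomic(struct zone *zone)
{
	unsigned long flags, nr = 0;
	unsigned int order;
	struct page *page;

	spin_lock_irqsave(&zone->lock, flags);
	for (order = 0; order < NR_PAGE_ORDERS; order++) {
		struct free_area *area = &zone->free_area[order];

		/* walk this order's highatomic free list */
		list_for_each_entry(page,
				    &area->free_list[MIGRATE_HIGHATOMIC],
				    buddy_list)
			nr += 1UL << order;
	}
	spin_unlock_irqrestore(&zone->lock, flags);

	return nr;
}

Doing that in every watermark check would obviously be far too
expensive, so caching the count in nr_free_highatomic is the right
trade-off.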
>
> Note that since nr_free_highatomic can be relatively small, using a
> per-cpu counter might cause too much drift and defeat its purpose, in
> addition to incurring extra memory overhead.
>
> Reported-by: Link Lin <linkl@xxxxxxxxxx>
> Signed-off-by: Yu Zhao <yuzhao@xxxxxxxxxx>
> Acked-by: David Rientjes <rientjes@xxxxxxxxxx>
Acked-by: Vlastimil Babka <vbabka@xxxxxxx>
For LTS benefit I'd also add:
Cc: <stable@xxxxxxxxxxxxxxx> # v6.12+
> ---
> include/linux/mmzone.h |  1 +
> mm/page_alloc.c        | 10 +++++++---
> 2 files changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 2e8c4307c728..5e8f567753bd 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -825,6 +825,7 @@ struct zone {
>  	unsigned long watermark_boost;
>  
>  	unsigned long nr_reserved_highatomic;
> +	unsigned long nr_free_highatomic;
>  
>  	/*
>  	 * We don't know if the memory that we're going to allocate will be
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a78acaae6d9c..372a386f34f5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -635,6 +635,8 @@ compaction_capture(struct capture_control *capc, struct page *page,
>  static inline void account_freepages(struct zone *zone, int nr_pages,
>  				     int migratetype)
>  {
> +	lockdep_assert_held(&zone->lock);
> +
>  	if (is_migrate_isolate(migratetype))
>  		return;
>  
> @@ -642,6 +644,9 @@ static inline void account_freepages(struct zone *zone, int nr_pages,
>  
>  	if (is_migrate_cma(migratetype))
>  		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
> +
> +	if (is_migrate_highatomic(migratetype))
> +		WRITE_ONCE(zone->nr_free_highatomic, zone->nr_free_highatomic + nr_pages);
>  }
>  
>  /* Used for pages not on another list */
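A note for posterity: the WRITE_ONCE() in account_freepages() pairs
with the READ_ONCE() in __zone_watermark_unusable_free() below, since
the latter runs without zone->lock (hence also the new
lockdep_assert_held() on the writer side). The annotations prevent
torn reads/writes of the counter.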
> @@ -3117,11 +3122,10 @@ static inline long __zone_watermark_unusable_free(struct zone *z,
>  
>  	/*
>  	 * If the caller does not have rights to reserves below the min
> -	 * watermark then subtract the high-atomic reserves. This will
> -	 * over-estimate the size of the atomic reserve but it avoids a search.
> +	 * watermark then subtract the free pages reserved for highatomic.
>  	 */
>  	if (likely(!(alloc_flags & ALLOC_RESERVES)))
> -		unusable_free += z->nr_reserved_highatomic;
> +		unusable_free += READ_ONCE(z->nr_free_highatomic);
>  
>  #ifdef CONFIG_CMA
>  	/* If allocation can't use CMA areas don't use free CMA pages */