Re: [PATCH 1/3] mm/page_alloc: add per-migratetype counts to buddy allocator

From: Barry Song

Date: Fri Nov 28 2025 - 19:34:18 EST


On Fri, Nov 28, 2025 at 11:12 AM Hongru Zhang <zhanghongru06@xxxxxxxxx> wrote:
>
[...]
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ed82ee55e66a..9431073e7255 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -818,6 +818,7 @@ static inline void __add_to_free_list(struct page *page, struct zone *zone,
> else
> list_add(&page->buddy_list, &area->free_list[migratetype]);
> area->nr_free++;
> + area->mt_nr_free[migratetype]++;
>
> if (order >= pageblock_order && !is_migrate_isolate(migratetype))
> __mod_zone_page_state(zone, NR_FREE_PAGES_BLOCKS, nr_pages);
> @@ -840,6 +841,8 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
> get_pageblock_migratetype(page), old_mt, nr_pages);
>
> list_move_tail(&page->buddy_list, &area->free_list[new_mt]);
> + area->mt_nr_free[old_mt]--;
> + area->mt_nr_free[new_mt]++;

The overhead comes from effectively maintaining the same count twice. Have we
checked whether the readers of area->nr_free are on a hot path? If not, we
could drop nr_free entirely and compute the sum over mt_nr_free[] on demand.

Buddyinfo and compaction do not seem to be on a hot path?
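
Something like the following userspace sketch of the idea; free_area,
MIGRATE_TYPES, and area_nr_free() here are illustrative stand-ins mirroring
the patch, not the actual kernel definitions:

```c
#include <stddef.h>

/* Hypothetical stand-in for the kernel's migratetype count. */
#define MIGRATE_TYPES 6

/* Simplified free_area: only the per-migratetype counters, no nr_free. */
struct free_area {
	unsigned long mt_nr_free[MIGRATE_TYPES];
};

/*
 * With nr_free dropped, the few readers (buddyinfo, compaction) would
 * derive the total by summing the per-migratetype counters on demand.
 */
static unsigned long area_nr_free(const struct free_area *area)
{
	unsigned long sum = 0;
	int mt;

	for (mt = 0; mt < MIGRATE_TYPES; mt++)
		sum += area->mt_nr_free[mt];
	return sum;
}
```

That way the allocation/free fast paths only touch one counter, and the cost
of the summation is paid on the (presumably cold) read side.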

Thanks
Barry