Re: [PATCH 06/49] mm/mm_init: fix uninitialized pageblock migratetype for ZONE_DEVICE compound pages
From: Mike Rapoport
Date: Mon Apr 13 2026 - 05:35:30 EST
On Sun, Apr 05, 2026 at 08:51:57PM +0800, Muchun Song wrote:
> Previously, memmap_init_zone_device() only initialized the migratetype
> of the first pageblock of a compound page. If the compound page size
> exceeds pageblock_nr_pages (e.g., 1GB hugepages with 2MB pageblocks),
> subsequent pageblocks in the compound page would remain uninitialized.
>
> This patch moves the migratetype initialization out of
> __init_zone_device_page() and into a separate function
> pageblock_migratetype_init_range(). This function iterates over the
> entire PFN range of the device memory, ensuring that every pageblock
> is correctly initialized.
>
> Fixes: c4386bd8ee3a ("mm/memremap: add ZONE_DEVICE support for compound pages")
> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> ---
> mm/mm_init.c | 41 ++++++++++++++++++++++++++---------------
> 1 file changed, 26 insertions(+), 15 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 9a44e8458fed..4936ca78966c 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -674,6 +674,18 @@ static inline void fixup_hashdist(void)
> static inline void fixup_hashdist(void) {}
> #endif /* CONFIG_NUMA */
>
> +static __meminit void pageblock_migratetype_init_range(unsigned long pfn,
> + unsigned long nr_pages,
> + int migratetype)
> +{
> + unsigned long end = pfn + nr_pages;
> +
> + for (pfn = pageblock_align(pfn); pfn < end; pfn += pageblock_nr_pages) {
> + init_pageblock_migratetype(pfn_to_page(pfn), migratetype, false);
> + cond_resched();
Do we really need to call cond_resched() on every iteration here?
> + }
> +}
> +
> /*
> * Initialize a reserved page unconditionally, finding its zone first.
> */
> @@ -1011,21 +1023,6 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
> page_folio(page)->pgmap = pgmap;
> page->zone_device_data = NULL;
>
> - /*
> - * Mark the block movable so that blocks are reserved for
> - * movable at startup. This will force kernel allocations
> - * to reserve their blocks rather than leaking throughout
> - * the address space during boot when many long-lived
> - * kernel allocations are made.
> - *
> - * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
> - * because this is done early in section_activate()
> - */
> - if (pageblock_aligned(pfn)) {
> - init_pageblock_migratetype(page, MIGRATE_MOVABLE, false);
> - cond_resched();
> - }
> -
> /*
> * ZONE_DEVICE pages other than MEMORY_TYPE_GENERIC are released
> * directly to the driver page allocator which will set the page count
> @@ -1122,6 +1119,8 @@ void __ref memmap_init_zone_device(struct zone *zone,
>
> __init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
>
> + cond_resched();
Originally we called cond_resched() once per pageblock. Now it is called
once per __init_zone_device_page() call, plus once for every pageblock in
the tight loop that sets the migratetype. Isn't that too much?
> +
> if (pfns_per_compound == 1)
> continue;
>
> @@ -1129,6 +1128,18 @@ void __ref memmap_init_zone_device(struct zone *zone,
> compound_nr_pages(altmap, pgmap));
> }
>
> + /*
> + * Mark the block movable so that blocks are reserved for
> + * movable at startup. This will force kernel allocations
> + * to reserve their blocks rather than leaking throughout
> + * the address space during boot when many long-lived
> + * kernel allocations are made.
> + *
> + * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
> + * because this is done early in section_activate()
> + */
> + pageblock_migratetype_init_range(start_pfn, nr_pages, MIGRATE_MOVABLE);
> +
> pr_debug("%s initialised %lu pages in %ums\n", __func__,
> nr_pages, jiffies_to_msecs(jiffies - start));
> }
> --
> 2.20.1
>
--
Sincerely yours,
Mike.