Re: [PATCH v2 6/6] mm/mm_init: Fix pageblock migratetype for ZONE_DEVICE compound pages

From: Muchun Song

Date: Wed Apr 15 2026 - 22:06:56 EST




> On Apr 16, 2026, at 01:03, Mike Rapoport <rppt@xxxxxxxxxx> wrote:
>
> Hi Muchun,
>
> On Wed, Apr 15, 2026 at 07:14:12PM +0800, Muchun Song wrote:
>> The memmap_init_zone_device() function only initializes the migratetype
>> of the first pageblock of a compound page. If the compound page size
>> exceeds pageblock_nr_pages (e.g., 1GB hugepages with 2MB pageblocks),
>> subsequent pageblocks in the compound page remain uninitialized.
>>
>> Move the migratetype initialization out of __init_zone_device_page()
>> and into a separate pageblock_migratetype_init_range() function, which
>> iterates over the entire PFN range of the mapped memory, ensuring that
>> all pageblocks are correctly initialized.
>>
>> Fixes: c4386bd8ee3a ("mm/memremap: add ZONE_DEVICE support for compound pages")
>> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
>> ---
>> mm/mm_init.c | 42 +++++++++++++++++++++++++++---------------
>> 1 file changed, 27 insertions(+), 15 deletions(-)
>>
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index f9f8e1af921c..30528c4206c1 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -674,6 +674,19 @@ static inline void fixup_hashdist(void)
>> static inline void fixup_hashdist(void) {}
>> #endif /* CONFIG_NUMA */
>>
>> +static __meminit void pageblock_migratetype_init_range(unsigned long pfn,
>> + unsigned long nr_pages,
>> + int migratetype)
>> +{
>> + unsigned long end = pfn + nr_pages;
>> +
>> + for (pfn = pageblock_align(pfn); pfn < end; pfn += pageblock_nr_pages) {
>> + init_pageblock_migratetype(pfn_to_page(pfn), migratetype, false);
>> + if (IS_ALIGNED(pfn, PAGES_PER_SECTION))
>> + cond_resched();
>> + }
>> +}
>> +
>> /*
>> * Initialize a reserved page unconditionally, finding its zone first.
>> */
>> @@ -1011,21 +1024,6 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
>> page_folio(page)->pgmap = pgmap;
>> page->zone_device_data = NULL;
>>
>> - /*
>> - * Mark the block movable so that blocks are reserved for
>> - * movable at startup. This will force kernel allocations
>> - * to reserve their blocks rather than leaking throughout
>> - * the address space during boot when many long-lived
>> - * kernel allocations are made.
>> - *
>> - * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
>> - * because this is done early in section_activate()
>> - */
>> - if (pageblock_aligned(pfn)) {
>> - init_pageblock_migratetype(page, MIGRATE_MOVABLE, false);
>> - cond_resched();
>> - }
>> -
>> /*
>> * ZONE_DEVICE pages other than MEMORY_TYPE_GENERIC are released
>> * directly to the driver page allocator which will set the page count
>> @@ -1122,6 +1120,8 @@ void __ref memmap_init_zone_device(struct zone *zone,
>>
>> __init_zone_device_page(page, pfn, zone_idx, nid, pgmap);
>>
>> + cond_resched();
>> +
>
> I don't think we want cond_resched() for every page here, even if it's a
> compound page :)

I'll update it to call cond_resched() only once every PAGES_PER_SECTION pages; does that make sense to you?

Thanks,
Muchun

>
> Otherwise
>
> Reviewed-by: Mike Rapoport (Microsoft) <rppt@xxxxxxxxxx>
>
>> if (pfns_per_compound == 1)
>> continue;
>>
>> @@ -1129,6 +1129,18 @@ void __ref memmap_init_zone_device(struct zone *zone,
>> compound_nr_pages(altmap, pgmap));
>> }
>>
>> + /*
>> + * Mark the block movable so that blocks are reserved for
>> + * movable at startup. This will force kernel allocations
>> + * to reserve their blocks rather than leaking throughout
>> + * the address space during boot when many long-lived
>> + * kernel allocations are made.
>> + *
>> + * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
>> + * because this is done early in section_activate()
>> + */
>> + pageblock_migratetype_init_range(start_pfn, nr_pages, MIGRATE_MOVABLE);
>> +
>> pr_debug("%s initialised %lu pages in %ums\n", __func__,
>> nr_pages, jiffies_to_msecs(jiffies - start));
>> }
>> --
>> 2.20.1
>>
>
> --
> Sincerely yours,
> Mike.