Re: [PATCH] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
From: David Hildenbrand (Arm)
Date: Mon Mar 23 2026 - 07:48:20 EST
On 3/23/26 12:31, Mike Rapoport wrote:
> On Mon, Mar 23, 2026 at 11:56:35AM +0100, David Hildenbrand (Arm) wrote:
>> On 3/19/26 10:56, Yuan Liu wrote:
>
> ...
>
>>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>>> index df34797691bd..96690e550024 100644
>>> --- a/mm/mm_init.c
>>> +++ b/mm/mm_init.c
>>> @@ -946,6 +946,7 @@ static void __init memmap_init_zone_range(struct zone *zone,
>>> unsigned long zone_start_pfn = zone->zone_start_pfn;
>>> unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
>>> int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
>>> + unsigned long zone_hole_start, zone_hole_end;
>>>
>>> start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
>>> end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);
>>> @@ -957,8 +958,19 @@ static void __init memmap_init_zone_range(struct zone *zone,
>>> zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
>>> false);
>>>
>>> - if (*hole_pfn < start_pfn)
>>> + WRITE_ONCE(zone->pages_with_online_memmap,
>>> + READ_ONCE(zone->pages_with_online_memmap) +
>>> + (end_pfn - start_pfn));
>>> +
>>> + if (*hole_pfn < start_pfn) {
>>> init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
>>> + zone_hole_start = clamp(*hole_pfn, zone_start_pfn, zone_end_pfn);
>>> + zone_hole_end = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
>>> + if (zone_hole_start < zone_hole_end)
>>> + WRITE_ONCE(zone->pages_with_online_memmap,
>>> + READ_ONCE(zone->pages_with_online_memmap) +
>>> + (zone_hole_end - zone_hole_start));
>>> + }
>>
>> The range can have larger holes without a memmap, and I think we would be
>> missing pages handled by the other init_unavailable_range() call?
>>
>>
>> There is one question for Mike, though: couldn't it happen that the
>> init_unavailable_range() call in memmap_init() would initialize
>> the memmap outside of the node/zone span?
>
> Yes, and it most likely will.
>
> Very common example is page 0 on x86 systems:
>
> [ 0.012196] DMA [mem 0x0000000000001000-0x0000000000ffffff]
> [ 0.012221] On node 0, zone DMA: 1 pages in unavailable ranges
> [ 0.012205] Early memory node ranges
> [ 0.012206] node 0: [mem 0x0000000000001000-0x000000000009efff]
>
> The unavailable page in zone DMA is the page from 0x0 to 0x1000 that is
> neither in node 0 nor in zone DMA.
>
> For ZONE_NORMAL it would be a more pathological case when zone/node span
> ends in a middle of a section, but that's still possible.
>
>> If so, I wonder whether we would want to adjust the node+zone space to
>> include these ranges.
>>
>> Later memory onlining could make these ranges suddenly fall into the
>> node/zone span.
>
> But doesn't memory onlining always happen at section boundaries?
Sure, but assume ZONE_NORMAL ends in the middle of a section, and then
you hotplug the next section.
Then, the zone spans that memmap. zone->pages_with_online_memmap will be
wrong.
Once we unplug the hotplugged section, the zone shrinking code will stumble
over the hole pfns and assume they belong to the zone. Again,
zone->pages_with_online_memmap will be wrong.
zone->pages_with_online_memmap being wrong means that it is smaller than
it should be. I guess nothing would actually be broken, but we would fail
to detect contiguous zones.
If there were an easy way to avoid that, that would be cleaner.
--
Cheers,
David