Re: [PATCH v7 2/2] mm/memory hotplug/unplug: Optimize zone->contiguous update when the pfn range changes
From: David Hildenbrand (Red Hat)
Date: Thu Jan 15 2026 - 12:00:44 EST
On 1/15/26 16:09, Li, Tianyou wrote:
Sorry for my delayed response, and thanks for all your suggestions,
David. My thoughts are inlined below for your consideration.
On 1/14/2026 7:13 PM, David Hildenbrand (Red Hat) wrote:
That's a great idea! Probably we need to move
This is nasty. I would wish we could just leave that code path alone.
In particular: I am 99% sure that we never ever run into this case in
practice.
E.g., on x86, we can have up to 2 GiB memory blocks. But the memmap for
that is 64/4096 * 2 GiB == 32 MiB ... and a memory section is 128 MiB.
As commented on patch #1, we should drop the set_zone_contiguous() in
this function either way and let online_pages() deal with it.
We just have to make sure that we don't create some inconsistencies by
doing that.
Can you double-check?
I thought about this some more, and it's all a bit nasty. We have to
get this right.
Losing the optimization for memmap_on_memory users indicates that we
are doing the wrong thing.
You could introduce the set_zone_contiguous() in this patch here. But
then, I think instead of
+	/*
+	 * If the allocated memmap pages are not in a full section, keep the
+	 * contiguous state as ZONE_CONTIG_NO.
+	 */
+	if (IS_ALIGNED(end_pfn, PAGES_PER_SECTION))
+		new_contiguous_state = zone_contig_state_after_growing(zone,
+								       pfn, nr_pages);
+
We'd actually unconditionally have to do that, no?
the adjust_present_page_count() call to right after
mhp_init_memmap_on_memory(), to keep present_pages in sync.
When the zone was previously contiguous, there are 2 situations:
1. If the newly added range is at the start of the zone, then after
mhp_init_memmap_on_memory() the zone contiguous flag becomes false via
the fast path. That is expected, as long as adjust_present_page_count()
was called before online_pages():
zone_contig_state_after_growing() in online_pages() will then return
ZONE_CONTIG_MAYBE, which is expected, and the set_zone_contiguous() in
online_pages() will compute the correct result.
2. If the newly added range is at the end of the zone, then
zone_contig_state_after_growing() will return ZONE_CONTIG_YES or
ZONE_CONTIG_NO, regardless of whether the memory section is online. When
the contiguous check happens in online_pages(), it will take the fast
path and get the correct contiguous state.
When the zone was previously not contiguous, the new zone
contiguous state will in any case be false in mhp_init_memmap_on_memory(),
because either the nr_vmemmap_pages cannot fill the hole or the memory
section is not yet online. If we update present_pages correctly, then in
online_pages(), zone_contig_state_after_growing() has the chance
to return ZONE_CONTIG_MAYBE, and since all memory sections are online by
then, set_zone_contiguous() will compute the correct result.
I am not sure if you'd like to consider another option: could we
encapsulate mhp_init_memmap_on_memory() and online_pages() into one
function, e.g. online_memory_block_pages(), with a corresponding
offline_memory_block_pages(), in memory_hotplug.c? Then we could check
the zone contiguous state for the newly added range as a whole.
int online_memory_block_pages(unsigned long start_pfn, unsigned long nr_pages,
			      unsigned long nr_vmemmap_pages, struct zone *zone,
			      struct memory_group *group)
{
	bool contiguous = zone->contiguous;
	enum zone_contig_state new_contiguous_state;
	int ret;

	/*
	 * Calculate the new zone contig state before move_pfn_range_to_zone()
	 * sets the zone temporarily to non-contiguous.
	 */
	new_contiguous_state = zone_contig_state_after_growing(zone, start_pfn,
							       nr_pages);
Should we clear the flag here?
	if (nr_vmemmap_pages) {
		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages,
						zone);
		if (ret)
			goto restore_zone_contig;
	}

	ret = online_pages(start_pfn + nr_vmemmap_pages,
			   nr_pages - nr_vmemmap_pages, zone, group);
	if (ret) {
		if (nr_vmemmap_pages)
			mhp_deinit_memmap_on_memory(start_pfn, nr_vmemmap_pages);
		goto restore_zone_contig;
	}

	/*
	 * Account once onlining succeeded. If the zone was unpopulated, it is
	 * now already properly populated.
	 */
	if (nr_vmemmap_pages)
		adjust_present_page_count(pfn_to_page(start_pfn), group,
					  nr_vmemmap_pages);

	/*
	 * Now that the ranges are indicated as online, check whether the whole
	 * zone is contiguous.
	 */
	set_zone_contiguous(zone, new_contiguous_state);
	return 0;

restore_zone_contig:
	zone->contiguous = contiguous;
	return ret;
}
That is even better, although it sucks to have to handle it on that level, and that it's so far away from the actual zone resizing code.
Should we do the same on the offlining path?
--
Cheers
David