Re: [PATCH v5 2/2] mm/memory hotplug: fix zone->contiguous always false when hotplug
From: Li, Tianyou
Date: Fri Dec 12 2025 - 00:35:28 EST
On 12/11/2025 1:16 PM, Oscar Salvador wrote:
On Mon, Dec 08, 2025 at 11:25:44PM +0800, Tianyou Li wrote:
From: Yuan Liu <yuan1.liu@xxxxxxxxx>
Function set_zone_contiguous uses __pageblock_pfn_to_page to
check that the whole pageblock is in the same zone. One assumption is
that the memory section must be online; otherwise __pageblock_pfn_to_page
returns NULL and set_zone_contiguous leaves zone->contiguous false.
When move_pfn_range_to_zone invoked set_zone_contiguous, the memory
section was not yet online, so the result was always false.
Then, this means that zone->contiguous was always left false on new
memory-hotplug operations, if it was false before?
I guess we did not notice this before because it is just an optimization,
so we always took the long road.
Nice catch.
Thanks Oscar. Yuan Liu ran the test and compared the results; that is how we found this issue.
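For anyone skimming the thread, the failure mode can be modeled in a few lines of plain userspace C. This is only a simplified sketch, not the kernel code; the names merely mirror __pageblock_pfn_to_page() and set_zone_contiguous():

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified userspace model of the failure mode described above.
 * Not the kernel implementation; names only mirror the kernel ones. */

#define SECTION_ONLINE  true
#define SECTION_OFFLINE false

struct page { int zone_id; };

/* Stand-in for __pageblock_pfn_to_page(): returns NULL unless the
 * section backing the pageblock is online. */
static struct page *pageblock_pfn_to_page(bool section_online,
					  struct page *first_page)
{
	if (!section_online)
		return NULL;
	return first_page;
}

/* Stand-in for set_zone_contiguous(): a single NULL pageblock makes
 * the whole zone non-contiguous. */
static bool zone_is_contiguous(bool section_online, struct page *first_page)
{
	return pageblock_pfn_to_page(section_online, first_page) != NULL;
}
```

In the kernel, move_pfn_range_to_zone() ran the equivalent of this check before the section was marked online, so the NULL branch was always taken and the zone could never be reported contiguous.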
To fix this issue, we removed set_zone_contiguous from
move_pfn_range_to_zone and placed it after the memory section is onlined.
Function remove_pfn_range_from_zone does not have this issue because
the memory section remains online at the time set_zone_contiguous is invoked.
Reviewed-by: Tianyou Li <tianyou.li@xxxxxxxxx>
Reviewed-by: Nanhai Zou <nanhai.zou@xxxxxxxxx>
Signed-off-by: Yuan Liu <yuan1.liu@xxxxxxxxx>
---
mm/memory_hotplug.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index d711f6e2c87f..f548d9180415 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -809,8 +809,7 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
{
struct pglist_data *pgdat = zone->zone_pgdat;
int nid = pgdat->node_id;
- const enum zone_contig_state contiguous_state =
- zone_contig_state_after_growing(zone, start_pfn, nr_pages);
+
clear_zone_contiguous(zone);
if (zone_is_empty(zone))
@@ -840,8 +839,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
MEMINIT_HOTPLUG, altmap, migratetype,
isolate_pageblock);
-
- set_zone_contiguous(zone, contiguous_state);
}
struct auto_movable_stats {
@@ -1150,6 +1147,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
{
unsigned long end_pfn = pfn + nr_pages;
int ret, i;
+ enum zone_contig_state contiguous_state = ZONE_CONTIG_NO;
ret = kasan_add_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
if (ret)
@@ -1164,6 +1162,10 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
if (mhp_off_inaccessible)
page_init_poison(pfn_to_page(pfn), sizeof(struct page) * nr_pages);
+ if (IS_ALIGNED(end_pfn, PAGES_PER_SECTION))
+ contiguous_state = zone_contig_state_after_growing(zone, pfn,
+ nr_pages);
Uhm, I think this deserves a little comment.
I guess that if we are not allocating memmap pages worth of a full
section, we keep it ZONE_CONTIG_NO.
Discussed with Yuan. We will add the comments as you suggested in patch v6. Thanks.
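To illustrate what the guard does, here is a minimal userspace sketch. IS_ALIGNED is the kernel's power-of-two form (modulo the typeof cast in include/linux/align.h), and the PAGES_PER_SECTION value of 32768 is only illustrative (e.g. 128 MiB sections with 4 KiB pages):

```c
#include <assert.h>
#include <stdbool.h>

/* Power-of-two alignment check, as in include/linux/align.h
 * (simplified: no typeof cast). */
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

/* Illustrative value only: 128 MiB sections with 4 KiB pages. */
#define PAGES_PER_SECTION 32768UL

/* Model of the guard in the hunk above: only a range whose end lands
 * on a section boundary may compute a contiguous state; any smaller
 * range leaves contiguous_state at ZONE_CONTIG_NO. */
static bool may_update_contig_state(unsigned long pfn, unsigned long nr_pages)
{
	return IS_ALIGNED(pfn + nr_pages, PAGES_PER_SECTION);
}
```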
Regards,
Tianyou