RE: [PATCH v3] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range

From: Liu, Yuan1

Date: Fri Apr 17 2026 - 02:35:08 EST



> >>> sashiko had several comments
> >>> https://sashiko.dev/#/patchset/20260408031615.1831922-1-yuan1.liu%40intel.com
> >>>
> >>> I skipped the ones related to hotplug, but in the mm_init part the
> >> comment
> >>> about zones that can have overlapping physical spans when mirrored
> >>> kernelcore is enabled seems valid.
> >
> > Hi David & Mike
> >
> > I’ve spent some time working through these issues to better
> > understand them.
> > For the overlapping physical spans (mirrored kernelcore), should I
> > avoid counting overlap_memmap_init in memmap_init_range in the next
> > version?
> > For example, change it as follows:
> >
> > +unsigned long __meminit
> > +memmap_init_range(unsigned long size, int nid, unsigned long zone,
> > +		unsigned long start_pfn,
> > +		unsigned long zone_end_pfn,
> > 		enum meminit_context context,
> > 		struct vmem_altmap *altmap, int migratetype,
> > 		bool isolate_pageblock)
> > {
> > 	unsigned long pfn, end_pfn = start_pfn + size;
> > +	unsigned long nr_init = 0;
> > 	struct page *page;
> >
> > 	if (highest_memmap_pfn < end_pfn - 1)
> > @@ -893,7 +897,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
> > 	if (zone == ZONE_DEVICE) {
> > 		if (!altmap)
> > -			return;
> > +			return 0;
> >
> > 		if (start_pfn == altmap->base_pfn)
> > 			start_pfn += altmap->reserve;
> > @@ -911,6 +915,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
> > 		if (defer_init(nid, pfn, zone_end_pfn)) {
> > 			deferred_struct_pages = true;
> > +			nr_init += end_pfn - pfn;
>
> It's confusing. Could the remaining range also include overlapping inits?
>
> Maybe the whole "skip overlapping init" should actually be handled on a
> higher level?
>
> I guess we'd want to skip any memblock_is_mirror(r) regions entirely.
>
> @Mike?

Hi Mike

David suggested moving the overlap handling to a higher level and
skipping memblock_is_mirror() regions entirely. I think this makes sense.

Would this work for you, or do you have a different preference?

Something like this:
static void __init memmap_init(void)
{
	...
	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
		struct pglist_data *node = NODE_DATA(nid);
		bool is_mirror = mirrored_kernelcore &&
				 memblock_is_mirror(&memblock.memory.regions[i]);

		for (j = 0; j < MAX_NR_ZONES; j++) {
			...
			if (is_mirror && j == ZONE_MOVABLE)
				continue;

			memmap_init_zone_range(zone, start_pfn, end_pfn,
					       &hole_pfn);

Best Regards,
Liu, Yuan1