RE: [PATCH v2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range

From: Liu, Yuan1

Date: Mon Apr 06 2026 - 21:00:09 EST


> -----Original Message-----
> From: Mike Rapoport <rppt@xxxxxxxxxx>
> Sent: Saturday, April 4, 2026 7:12 PM
> To: Liu, Yuan1 <yuan1.liu@xxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>; Oscar Salvador <osalvador@xxxxxxx>;
> Wei Yang <richard.weiyang@xxxxxxxxx>; linux-mm@xxxxxxxxx;
> Hu, Yong <yong.hu@xxxxxxxxx>; Zou, Nanhai <nanhai.zou@xxxxxxxxx>;
> Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>; Zhuo, Qiuxu <qiuxu.zhuo@xxxxxxxxx>;
> Chen, Yu C <yu.c.chen@xxxxxxxxx>; Deng, Pan <pan.deng@xxxxxxxxx>;
> Li, Tianyou <tianyou.li@xxxxxxxxx>; Chen Zhang <zhangchen.kidd@xxxxxx>;
> linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH v2] mm/memory hotplug/unplug: Optimize zone contiguous
> check when changing pfn range
>
> On Wed, Apr 01, 2026 at 03:01:55AM -0400, Yuan Liu wrote:
> > When move_pfn_range_to_zone() or remove_pfn_range_from_zone() updates a
> > zone, set_zone_contiguous() rescans the entire zone pageblock-by-pageblock
> > to rebuild zone->contiguous. For large zones this is a significant cost
> > during memory hotplug and hot-unplug.
>
> ...
>
> > diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
> > index b76183545e5b..e47e96ef6a6d 100644
> > --- a/Documentation/mm/physical_memory.rst
> > +++ b/Documentation/mm/physical_memory.rst
> > @@ -483,6 +483,17 @@ General
> >  ``present_pages`` should use ``get_online_mems()`` to get a stable value. It
> >  is initialized by ``calculate_node_totalpages()``.
> >
> > +``pages_with_online_memmap``
> > +  Tracks pages within the zone that have an online memmap (present pages and
>
> Please spell out "memory map" rather than "memmap" in the documentation
> and in the comments.

Sure, I will fix it in the next version.

> > +  memory holes whose memmap has been initialized). When ``spanned_pages`` ==
> > +  ``pages_with_online_memmap``, ``pfn_to_page()`` can be performed without
> > +  further checks on any PFN within the zone span.
> > +
> > +  Note: this counter may temporarily undercount when pages with an online
> > +  memmap exist outside the current zone span. Growing the zone to cover such
> > +  pages and later shrinking it back may result in a "too small" value. This is
> > +  safe: it merely prevents detecting a contiguous zone.
> > +
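
To make the intent of the new field concrete: the fast path it is meant
to enable in set_zone_contiguous() is conceptually the following (a
sketch of the idea only, not the exact hunk from this patch):

	/*
	 * Sketch: when every page in the zone span has an online memory
	 * map, the zone is contiguous by definition, so the expensive
	 * pageblock-by-pageblock walk can be skipped.
	 */
	if (zone->spanned_pages == zone->pages_with_online_memmap) {
		zone->contiguous = true;
		return;
	}
	/* otherwise fall back to scanning the zone span */
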
> >  ``present_early_pages``
> >    The present pages existing within the zone located on memory available since
> >    early boot, excluding hotplugged memory. Defined only when
>
> ...
>
> > +/*
> > + * Initialize unavailable range [spfn, epfn) while accounting only the pages
> > + * that fall within the zone span towards pages_with_online_memmap. Pages
> > + * outside the zone span are still initialized but not accounted.
> > + */
> > +static void __init init_unavailable_range_for_zone(struct zone *zone,
> > +						    unsigned long spfn,
> > +						    unsigned long epfn)
> > +{
> > +	int nid = zone_to_nid(zone);
> > +	int zid = zone_idx(zone);
> > +	unsigned long in_zone_start;
> > +	unsigned long in_zone_end;
> > +
> > +	in_zone_start = clamp(spfn, zone->zone_start_pfn, zone_end_pfn(zone));
> > +	in_zone_end = clamp(epfn, zone->zone_start_pfn, zone_end_pfn(zone));
> > +
> > +	if (spfn < in_zone_start)
> > +		init_unavailable_range(spfn, in_zone_start, zid, nid);
> > +
> > +	if (in_zone_start < in_zone_end)
> > +		zone->pages_with_online_memmap +=
> > +			init_unavailable_range(in_zone_start, in_zone_end,
> > +					       zid, nid);
> > +
> > +	if (in_zone_end < epfn)
> > +		init_unavailable_range(in_zone_end, epfn, zid, nid);
> > +}
>
> I think we can make it simpler, see below.
>
> >  /*
> > @@ -956,9 +986,10 @@ static void __init memmap_init_zone_range(struct zone *zone,
> >  	memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
> >  			  zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
> >  			  false);
> > +	zone->pages_with_online_memmap += end_pfn - start_pfn;
> >
> >  	if (*hole_pfn < start_pfn)
> > -		init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
> > +		init_unavailable_range_for_zone(zone, *hole_pfn, start_pfn);
>
> Here *hole_pfn is either inside the zone span or below it, and in the
> second case it's enough to adjust the page count returned by
> init_unavailable_range() by (zone_start_pfn - *hole_pfn).

Got it, I will refine it in the next version.
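
Concretely, I plan something like the following sketch (assuming
init_unavailable_range() keeps returning the number of pages it
initialized, as in this v2):

	if (*hole_pfn < start_pfn) {
		unsigned long nr;

		nr = init_unavailable_range(*hole_pfn, start_pfn,
					    zone_id, nid);
		/*
		 * Only the part of the hole that lies inside the zone
		 * span counts towards pages_with_online_memmap.
		 */
		if (*hole_pfn < zone->zone_start_pfn)
			nr -= zone->zone_start_pfn - *hole_pfn;
		zone->pages_with_online_memmap += nr;
	}

This avoids the three-way split of init_unavailable_range_for_zone()
at this call site.
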

> >  	*hole_pfn = end_pfn;
> >  }
> > @@ -996,8 +1027,11 @@ static void __init memmap_init(void)
> >  #else
> >  	end_pfn = round_up(end_pfn, MAX_ORDER_NR_PAGES);
> >  #endif
> > -	if (hole_pfn < end_pfn)
> > -		init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
> > +	if (hole_pfn < end_pfn) {
> > +		struct zone *zone = &NODE_DATA(nid)->node_zones[zone_id];
> > +
> > +		init_unavailable_range_for_zone(zone, hole_pfn, end_pfn);
>
> Here we know that the range is not in any zone span.

Indeed, the range here does not fall within any zone span, so the
accounting can be skipped entirely. Thank you for your review.
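
So for the next version I will drop this hunk and keep the original
call, i.e. roughly:

	/*
	 * This tail hole lies outside every zone span, so nothing is
	 * added to pages_with_online_memmap here.
	 */
	if (hole_pfn < end_pfn)
		init_unavailable_range(hole_pfn, end_pfn, zone_id, nid);
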

> > +	}
> >  }
> >
>
> --
> Sincerely yours,
> Mike.