Re: [PATCH v3] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range

From: Mike Rapoport

Date: Thu Apr 09 2026 - 10:40:20 EST


On Wed, Apr 08, 2026 at 09:36:14AM +0200, David Hildenbrand (Arm) wrote:
> On 4/8/26 05:16, Yuan Liu wrote:
> > When move_pfn_range_to_zone() or remove_pfn_range_from_zone() updates a
> > zone, set_zone_contiguous() rescans the entire zone pageblock-by-pageblock
> > to rebuild zone->contiguous. For large zones this is a significant cost
> > during memory hotplug and hot-unplug.
> >
> > Add a new zone member pages_with_online_memmap that tracks the number of
> > pages within the zone span that have an online memory map (including present
> > pages and memory holes whose memory map has been initialized). When
> > spanned_pages == pages_with_online_memmap the zone is contiguous and
> > pfn_to_page() can be called on any PFN in the zone span without further
> > pfn_valid() checks.
> >
> > Only pages that fall within the current zone span are accounted towards
> > pages_with_online_memmap. A "too small" value is safe, it merely prevents
> > detecting a contiguous zone.
> >
> > The following test cases of memory hotplug for a VM [1], tested in the
> > environment [2], show that this optimization can significantly reduce the
> > memory hotplug time [3].
> >
> > +----------------+------+---------------+--------------+----------------+
> > |                | Size | Time (before) | Time (after) | Time Reduction |
> > |                +------+---------------+--------------+----------------+
> > | Plug Memory    | 256G | 10s           | 3s           | 70%            |
> > |                +------+---------------+--------------+----------------+
> > |                | 512G | 36s           | 7s           | 81%            |
> > +----------------+------+---------------+--------------+----------------+
> >
> > +----------------+------+---------------+--------------+----------------+
> > |                | Size | Time (before) | Time (after) | Time Reduction |
> > |                +------+---------------+--------------+----------------+
> > | Unplug Memory  | 256G | 11s           | 4s           | 64%            |
> > |                +------+---------------+--------------+----------------+
> > |                | 512G | 36s           | 9s           | 75%            |
> > +----------------+------+---------------+--------------+----------------+
> >
> > [1] Qemu commands to hotplug 256G/512G memory for a VM:
> > object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
> > device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
> > qom-set vmem1 requested-size 256G/512G (Plug Memory)
> > qom-set vmem1 requested-size 0G (Unplug Memory)
> >
> > [2] Hardware : Intel Icelake server
> > Guest Kernel : v7.0-rc4
> > Qemu : v9.0.0
> >
> > Launch VM :
> > qemu-system-x86_64 -accel kvm -cpu host \
> > -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
> > -drive file=./seed.img,format=raw,if=virtio \
> > -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
> > -m 2G,slots=10,maxmem=2052472M \
> > -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
> > -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
> > -nographic -machine q35 \
> > -nic user,hostfwd=tcp::3000-:22
> >
> > Guest kernel auto-onlines newly added memory blocks:
> > echo online > /sys/devices/system/memory/auto_online_blocks
> >
> > [3] The time from typing the QEMU commands in [1] to when the output of
> > 'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
> > memory is recognized.
> >
> > Reported-by: Nanhai Zou <nanhai.zou@xxxxxxxxx>
> > Reported-by: Chen Zhang <zhangchen.kidd@xxxxxx>
> > Tested-by: Yuan Liu <yuan1.liu@xxxxxxxxx>
> > Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@xxxxxxxxx>
> > Reviewed-by: Yu C Chen <yu.c.chen@xxxxxxxxx>
> > Reviewed-by: Pan Deng <pan.deng@xxxxxxxxx>
> > Reviewed-by: Nanhai Zou <nanhai.zou@xxxxxxxxx>
> > Co-developed-by: Tianyou Li <tianyou.li@xxxxxxxxx>
> > Signed-off-by: Tianyou Li <tianyou.li@xxxxxxxxx>
> > Signed-off-by: Yuan Liu <yuan1.liu@xxxxxxxxx>
> > Acked-by: David Hildenbrand (Arm) <david@xxxxxxxxxx>
> > ---
>
> [...]
>
> > @@ -842,7 +842,7 @@ overlap_memmap_init(unsigned long zone, unsigned long *pfn)
> > * zone/node above the hole except for the trailing pages in the last
> > * section that will be appended to the zone/node below.
> > */
> > -static void __init init_unavailable_range(unsigned long spfn,
> > +static unsigned long __init init_unavailable_range(unsigned long spfn,
> > unsigned long epfn,
> > int zone, int node)
> > {
> > @@ -858,6 +858,7 @@ static void __init init_unavailable_range(unsigned long spfn,
> > if (pgcnt)
> > pr_info("On node %d, zone %s: %lld pages in unavailable ranges\n",
> > node, zone_names[zone], pgcnt);
> > + return pgcnt;
> > }
> >
> > /*
> > @@ -956,9 +957,22 @@ static void __init memmap_init_zone_range(struct zone *zone,
> > memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
> > zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
> > false);
> > + zone->pages_with_online_memmap += end_pfn - start_pfn;
> >
> > - if (*hole_pfn < start_pfn)
> > - init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
> > + if (*hole_pfn < start_pfn) {
> > + unsigned long pgcnt;
> > +
> > + if (*hole_pfn < zone_start_pfn) {
> > + init_unavailable_range(*hole_pfn, zone_start_pfn,
> > + zone_id, nid);
> > + pgcnt = init_unavailable_range(zone_start_pfn,
> > + start_pfn, zone_id, nid);
>
> Indentation of parameters.
>
> > + } else {
> > + pgcnt = init_unavailable_range(*hole_pfn, start_pfn,
> > + zone_id, nid);
>
>
> Same here.
>
> > + }
> > + zone->pages_with_online_memmap += pgcnt;
> > + }
>
>
> Maybe something like the following could make it nicer to read, just a
> thought.
>
>
> unsigned long hole_start_pfn = *hole_pfn;
>
> if (hole_start_pfn < zone_start_pfn) {
> init_unavailable_range(hole_start_pfn, zone_start_pfn,
> zone_id, nid);
> hole_start_pfn = zone_start_pfn;
> }
> pgcnt = init_unavailable_range(hole_start_pfn, start_pfn,
> zone_id, nid);
>

Yeah, this looks better :)

Sashiko had several comments:
https://sashiko.dev/#/patchset/20260408031615.1831922-1-yuan1.liu%40intel.com

I skipped the ones related to hotplug, but in the mm_init part, the comment
about zones that can have overlapping physical spans when mirrored
kernelcore is enabled does seem valid.

> --
> Cheers,
> David

--
Sincerely yours,
Mike.