Re: [PATCH 01/49] mm/sparse: fix vmemmap accounting imbalance on memory hotplug error
From: Mike Rapoport
Date: Mon Apr 13 2026 - 08:10:27 EST
On Mon, Apr 13, 2026 at 05:49:17PM +0800, Muchun Song wrote:
>
>
> > On Apr 13, 2026, at 17:35, Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> >
> > On Mon, Apr 13, 2026 at 12:19:50PM +0300, Mike Rapoport wrote:
> >> On Sun, Apr 05, 2026 at 08:51:52PM +0800, Muchun Song wrote:
> >>> In section_activate(), if populate_section_memmap() fails, the error
> >>> handling path calls section_deactivate() to roll back the state. This
> >>> approach introduces an accounting imbalance.
> >>>
> >>> Since the commit c3576889d87b ("mm: fix accounting of memmap pages"),
> >>> memmap pages are accounted for only after populate_section_memmap()
> >>> succeeds. However, section_deactivate() unconditionally decrements the
> >>> vmemmap account. Consequently, a failure in populate_section_memmap()
> >>> leads to a negative offset (underflow) in the system's vmemmap tracking.
> >>>
> >>> We can fix this by ensuring that the vmemmap accounting is incremented
> >>> immediately before checking for the success of populate_section_memmap().
> >>> If populate_section_memmap() fails, the subsequent call to
> >>> section_deactivate() will decrement the accounting, perfectly offsetting
> >>> the increment and maintaining balance.
> >>>
> >>> Fixes: c3576889d87b ("mm: fix accounting of memmap pages")
> >>> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> >>> ---
> >>> mm/sparse-vmemmap.c | 2 +-
> >>> 1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> >>> index 6eadb9d116e4..ee27d0c0efe2 100644
> >>> --- a/mm/sparse-vmemmap.c
> >>> +++ b/mm/sparse-vmemmap.c
> >>> @@ -822,11 +822,11 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
> >>> return pfn_to_page(pfn);
> >>>
> >>> memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
> >>> + memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
> >>
> >> This logically belongs to the success path in populate_section_memmap(). If we
> >> update the counter there, we won't need to temporarily increase it at all.
> >
> > Not strictly related to this patchset, but it seems we could have a single
> > memmap_boot_pages_add() in memmap_alloc() rather than updating the counter
> > in each of memmap_alloc()'s callers.
>
> It will indeed become simpler and is a good cleanup direction, but there
> is a slight change in semantics: the page tables used for vmemmap page
> mapping will also be counted in memmap_boot_pages_add(). This might not
> be an issue (after all, the size of the page tables is very small compared
> to struct pages, right?).
>
> Additionally, I still lean toward leaving this patch unchanged, because it
> is a pure bugfix and, as such, easy to backport for those who need it. The
> cleanup would involve many more changes, so I prefer to do that in a
> separate patch. What do you think?
For this patch and for easy backporting, I still think it is cleaner to have
the counter incremented in populate_section_memmap() rather than immediately
after it.
> Thanks,
> Muchun.
>
> >
> >>> if (!memmap) {
> >>> section_deactivate(pfn, nr_pages, altmap);
> >>> return ERR_PTR(-ENOMEM);
> >>> }
> >>> - memmap_pages_add(DIV_ROUND_UP(nr_pages * sizeof(struct page), PAGE_SIZE));
> >>>
> >>> return memmap;
> >>> }
> >>> --
> >>> 2.20.1
> >>>
> >>
> >> --
> >> Sincerely yours,
> >> Mike.
> >
> > --
> > Sincerely yours,
> > Mike.
>
>
>
--
Sincerely yours,
Mike.