Re: [PATCH] mm/sparse: Fix race on mem_section->usage in pfn walkers
From: Andrew Morton
Date: Wed Apr 15 2026 - 01:45:49 EST
On Wed, 15 Apr 2026 10:23:26 +0800 Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> When memory is hot-removed, section_deactivate() can tear down
> mem_section->usage while concurrent pfn walkers still inspect the
> subsection map via pfn_section_valid() or pfn_section_first_valid().
>
> After commit 5ec8e8ea8b77 ("mm/sparsemem: fix race in accessing
> memory_section->usage") converted the teardown to an RCU-based
> scheme, the code still relies on SECTION_HAS_MEM_MAP becoming visible
> to readers before ms->usage is cleared and queued for freeing.
>
> That ordering is not guaranteed. section_deactivate() can clear
> ms->usage and queue kfree_rcu() before another CPU observes the
> SECTION_HAS_MEM_MAP clear. A concurrent pfn walker can therefore see
> valid_section() return true, enter its sched-RCU read-side critical
> section after kfree_rcu() has already been queued, and then dereference
> a stale ms->usage pointer.
Then what happens? Can it oops?
> And pfn_to_online_page() can call pfn_section_valid() without its
> own sched-RCU read-side critical section, which has a similar problem.
>
> The race looks like this:
>
> compact_zone()                          memunmap_pages()
> ==============                          ================
>                                         __remove_pages()->
>                                           sparse_remove_section()->
>                                             section_deactivate():
>                                               a) [ Clear SECTION_HAS_MEM_MAP
>                                                    is reordered to b) ]
>                                               kfree_rcu(ms->usage)
> __pageblock_pfn_to_page()
>   ......
>   pfn_valid():
>     rcu_read_lock_sched()
>     valid_section()     // returns true
>     pfn_section_valid()
>     [ Access ms->usage which is UAF ]
>                                               WRITE_ONCE(ms->usage, NULL)
>     rcu_read_unlock_sched()
>                                               b) Clear SECTION_HAS_MEM_MAP
>
> Fix this by using rcu_replace_pointer() when clearing ms->usage in
> section_deactivate(), so that freeing no longer relies on the ordering
> of the SECTION_HAS_MEM_MAP clear.
>
> Fixes: 5ec8e8ea8b77 ("mm/sparsemem: fix race in accessing memory_section->usage")
December 2023.
> Signed-off-by: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> ---
> This patch is focused on the ms->usage lifetime race only.
>
> ...
>
> I am not fully sure whether that reasoning is correct, or whether current
> callers are expected to rely on additional hotplug serialization instead.
> Comments on whether this is a real issue, and how the vmemmap lifetime is
> expected to be handled here, would be very helpful.
Thanks. Quite a bit for consideration.
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -601,8 +601,10 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
> * was allocated during boot.
> */
> if (!PageReserved(virt_to_page(ms->usage))) {
> - kfree_rcu(ms->usage, rcu);
> - WRITE_ONCE(ms->usage, NULL);
> + struct mem_section_usage *usage;
> +
> + usage = rcu_replace_pointer(ms->usage, NULL, true);
> + kfree_rcu(usage, rcu);
> }
> memmap = pfn_to_page(SECTION_ALIGN_DOWN(pfn));
> }
This part isn't applicable to 7.0 - it depends on material I've sent to
Linus for 7.1-rc1.
So for now I'll drop this into mm-unstable to get it some runtime
testing. If people like this patch and we decide to proceed with it
then I can make it a hotfix for 7.1-rcX. But the -stable people will
be wanting a backportable version of it, if we decide to backport.