Re: [PATCH v2] mm/alloc_tag: clear codetag for pages allocated before page_ext initialization

From: Hao Ge

Date: Fri Mar 27 2026 - 04:21:27 EST


Hi Suren,


On 2026/3/27 09:19, Suren Baghdasaryan wrote:
> On Thu, Mar 26, 2026 at 6:11 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
> > On Thu, 26 Mar 2026 22:05:54 +0800 Hao Ge <hao.ge@xxxxxxxxx> wrote:

> > > Due to initialization ordering, page_ext is allocated and initialized
> > > relatively late during boot. Some pages have already been allocated
> > > and freed before page_ext becomes available, leaving their codetag
> > > uninitialized.
> > >
> > > A clear example is in init_section_page_ext(): alloc_page_ext() calls
> > > kmemleak_alloc(). If the slab cache has no free objects, kmemleak falls
> > > back to the buddy allocator to allocate memory. However, at this point
> > > page_ext is not yet fully initialized, so these newly allocated pages
> > > have no codetag set. These pages may later be reclaimed by KASAN, and
> > > because their codetag ref is still empty, a warning triggers when they
> > > are freed.
> > >
> > > Use a global array to track pages allocated before page_ext is fully
> > > initialized. The array size is fixed at 8192 entries, and a warning is
> > > emitted if this limit is exceeded. When page_ext initialization
> > > completes, set the codetag of the tracked pages to empty so that no
> > > warning fires when they are freed later.

> > Thanks. I'll queue this for review and test.
> >
> > But where will I queue it?
> I don't think it's extra urgent. It is visible only when debugging
> with CONFIG_MEM_ALLOC_PROFILING_DEBUG.

> > > Fixes: 93d5440ece3c ("alloc_tag: uninline code gated by mem_alloc_profiling_key in page allocator")
> > Hmm. I'm not sure that's the right patch. Technically the problem
> > has existed since CONFIG_MEM_ALLOC_PROFILING_DEBUG was introduced. I'll
> > double-check.


I believe this should be Fixes: dcfe378c81f72 ("lib: introduce support for page allocation tagging").

Earlier I thought backporting this fix that far would be quite involved,
but after further consideration, that is indeed the commit being fixed.


> > A year ago, so a cc:stable might be needed.

> > > +#ifdef CONFIG_MEM_ALLOC_PROFILING_DEBUG
> > otoh, it appears that the bug only hits with
> > CONFIG_MEM_ALLOC_PROFILING_DEBUG=y? If so, I'll add that (important)
> > info to the changelog.
> Correct, it affects only CONFIG_MEM_ALLOC_PROFILING_DEBUG=y and only
> if !mem_profiling_compressed.

> > Do people use CONFIG_MEM_ALLOC_PROFILING_DEBUG much? Is a backport
> > really needed?
> IMO a backport would be good.

> > Either way, it seems that this isn't a very urgent issue, so I'm
> > inclined to add it to the 7.1-rc1 pile, perhaps with a cc:stable.
> >
> > Please all share your thoughts with me, thanks.
> I'm reviewing and testing the patch and there is a race and a couple
> of smaller issues. I'll post a reply later today.

Thank you so much for your kind help! I really appreciate it.

Thanks

Hao