Re: [PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths
From: Vlastimil Babka (SUSE)
Date: Mon Mar 30 2026 - 12:40:02 EST
+Cc KMSAN folks, please review
On 3/30/26 10:36, Ke Zhao wrote:
> Some page allocation paths call post_alloc_hook() but skip
> kmsan_alloc_page(), leaving stale KMSAN shadow on the allocated pages.
> Fix this by explicitly calling kmsan_alloc_page() after they
> successfully get new pages.
>
> Reported-by: syzbot+2aee6839a252e612ce34@xxxxxxxxxxxxxxxxxxxxxxxxx
FYI the report thread:
https://lore.kernel.org/all/698f1877.a70a0220.2c38d7.00c2.GAE@xxxxxxxxxx/
> Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34
Did syzbot confirm this as a fix? I wonder whether this submission alone will
trigger that check without some syz test command or whatnot.
>
> Signed-off-by: Ke Zhao <ke.zhao.kernel@xxxxxxxxx>
> ---
> mm/page_alloc.c | 13 +++++++++++++
> 1 file changed, 13 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2d4b6f1a554e..6435e8708ef4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,
>
> prep_new_page(page, 0, gfp, 0);
> set_page_refcounted(page);
> +
> + trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
Probably makes sense to add that here, yeah.
> + kmsan_alloc_page(page, 0, gfp);
> +
> page_array[nr_populated++] = page;
> }
>
> @@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
> int i;
>
> post_alloc_hook(page, order, gfp_mask);
> + /*
> + * Initialize KMSAN state right after post_alloc_hook().
> + * This prepares the pages for subsequent outer callers
> + * that might free sub-pages after the split.
> + */
> + kmsan_alloc_page(page, order, gfp_mask);
> if (!order)
> continue;
>
> @@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,
>
> check_new_pages(head, order);
> prep_new_page(head, order, gfp_mask, 0);
> +
> + trace_mm_page_alloc(page, order, gfp_mask, get_pageblock_migratetype(page));
But I'm not sure we want to use this trace event here; at minimum it would be
inconsistent with the branch above, which uses split_free_frozen_pages()?
> + kmsan_alloc_page(page, order, gfp_mask);
> } else {
> ret = -EINVAL;
> WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",
>
> ---
> base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
> change-id: 20260325-fix-kmsan-e291f752a949
>
> Best regards,