[PATCH] mm, KMSAN: Add missing shadow memory initialization in special allocation paths

From: Ke Zhao

Date: Mon Mar 30 2026 - 04:41:35 EST


Some page allocation paths call post_alloc_hook() but skip
kmsan_alloc_page(), leaving stale KMSAN shadow on the allocated
pages. Fix this by calling kmsan_alloc_page() explicitly after
these paths successfully obtain new pages.

Reported-by: syzbot+2aee6839a252e612ce34@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://syzkaller.appspot.com/bug?extid=2aee6839a252e612ce34

Signed-off-by: Ke Zhao <ke.zhao.kernel@xxxxxxxxx>
---
mm/page_alloc.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2d4b6f1a554e..6435e8708ef4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5189,6 +5189,10 @@ unsigned long alloc_pages_bulk_noprof(gfp_t gfp, int preferred_nid,

prep_new_page(page, 0, gfp, 0);
set_page_refcounted(page);
+
+ trace_mm_page_alloc(page, 0, gfp, ac.migratetype);
+ kmsan_alloc_page(page, 0, gfp);
+
page_array[nr_populated++] = page;
}

@@ -6911,6 +6915,12 @@ static void split_free_frozen_pages(struct list_head *list, gfp_t gfp_mask)
int i;

post_alloc_hook(page, order, gfp_mask);
+ /*
+ * Initialize the KMSAN state right after post_alloc_hook() so
+ * that the sub-pages carry valid shadow when outer callers free
+ * them after the split.
+ */
+ kmsan_alloc_page(page, order, gfp_mask);
if (!order)
continue;

@@ -7117,6 +7127,9 @@ int alloc_contig_frozen_range_noprof(unsigned long start, unsigned long end,

check_new_pages(head, order);
prep_new_page(head, order, gfp_mask, 0);
+
+ trace_mm_page_alloc(head, order, gfp_mask, get_pageblock_migratetype(head));
+ kmsan_alloc_page(head, order, gfp_mask);
} else {
ret = -EINVAL;
WARN(true, "PFN range: requested [%lu, %lu), allocated [%lu, %lu)\n",

---
base-commit: bbeb83d3182abe0d245318e274e8531e5dd7a948
change-id: 20260325-fix-kmsan-e291f752a949

Best regards,
--
Ke Zhao <ke.zhao.kernel@xxxxxxxxx>