[PATCH -next v2] mm/page_alloc: fix a false memory corruption

From: Qian Cai
Date: Thu Jun 20 2019 - 16:46:30 EST


The linux-next commit "mm: security: introduce init_on_alloc=1 and
init_on_free=1 boot options" [1] introduced a false-positive memory
corruption report when init_on_free=1 and page_poison=on are used
together: page poisoning expects the 0xaa pattern when a page is
allocated, but the poison pattern had been overwritten with zeroes by
init_on_free=1 at free time.

Fix it by switching the order of kernel_init_free_pages() and
kernel_poison_pages() in free_pages_prepare(), so that zeroing happens
before poisoning.

[1] https://patchwork.kernel.org/patch/10999465/

Signed-off-by: Qian Cai <cai@xxxxxx>
---

v2: After further debugging, the issue seen after switching the order is
likely a separate problem, since clear_page() should not cause issues
for subsequent accesses.

mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 54dacf35d200..32bbd30c5f85 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1172,9 +1172,10 @@ static __always_inline bool free_pages_prepare(struct page *page,
PAGE_SIZE << order);
}
arch_free_page(page, order);
- kernel_poison_pages(page, 1 << order, 0);
if (want_init_on_free())
kernel_init_free_pages(page, 1 << order);
+
+ kernel_poison_pages(page, 1 << order, 0);
if (debug_pagealloc_enabled())
kernel_map_pages(page, 1 << order, 0);

--
1.8.3.1