Re: [PATCH 1/1] mm: prevent poison consumption when splitting THP

From: David Hildenbrand

Date: Mon Sep 29 2025 - 03:34:19 EST


On 28.09.25 05:28, Qiuxu Zhuo wrote:
From: Andrew Zaborowski <andrew.zaborowski@xxxxxxxxx>

When performing memory error injection on a THP (Transparent Huge Page)
mapped to userspace on an x86 server, the kernel panics with the following
trace. The expected behavior is to terminate the affected process instead
of panicking the kernel, as the x86 Machine Check code can recover from an
in-userspace #MC.

mce: [Hardware Error]: CPU 0: Machine Check Exception: f Bank 3: bd80000000070134
mce: [Hardware Error]: RIP 10:<ffffffff8372f8bc> {memchr_inv+0x4c/0xf0}
mce: [Hardware Error]: TSC afff7bbff88a ADDR 1d301b000 MISC 80 PPIN 1e741e77539027db
mce: [Hardware Error]: PROCESSOR 0:d06d0 TIME 1758093249 SOCKET 0 APIC 0 microcode 80000320
mce: [Hardware Error]: Run the above through 'mcelog --ascii'
mce: [Hardware Error]: Machine check: Data load in unrecoverable area of kernel
Kernel panic - not syncing: Fatal local machine check

The root cause of this panic is that handling a memory failure triggered by
an in-userspace #MC necessitates splitting the THP. The splitting process
employs a mechanism, implemented in try_to_map_unused_to_zeropage(), which
reads the sub-pages of the THP to identify zero-filled pages. However,
reading the sub-pages results in a second in-kernel #MC, occurring before
the initial memory_failure() completes, ultimately leading to a kernel
panic. See the call trace below, annotated with the two #MCs.

First Machine Check occurs // [1]
memory_failure() // [2]
try_to_split_thp_page()
split_huge_page()
split_huge_page_to_list_to_order()
__folio_split() // [3]
remap_page()
remove_migration_ptes()
remove_migration_pte()
try_to_map_unused_to_zeropage()
memchr_inv() // [4]
Second Machine Check occurs // [5]
Kernel panic

[1] Triggered by accessing a hardware-poisoned THP in userspace, which is
typically recoverable by terminating the affected process.

[2] Call folio_set_has_hwpoisoned() before try_to_split_thp_page().

[3] Pass the RMP_USE_SHARED_ZEROPAGE remap flag to remap_page().

[4] Re-access sub-pages of the hw-poisoned THP in the kernel.

[5] Triggered in-kernel, leading to a kernel panic.

In Step[2], memory_failure() sets the has_hwpoisoned flag on the THP,
right before calling try_to_split_thp_page(). Fix this panic by not
passing the RMP_USE_SHARED_ZEROPAGE flag to remap_page() in Step[3]
if the THP has the has_hwpoisoned flag set. This prevents access to
sub-pages of the poisoned THP for zero-page identification, avoiding
a second in-kernel #MC that would cause kernel panic.

[ Qiuxu: Rewrote the commit message. ]

Reported-by: Farrah Chen <farrah.chen@xxxxxxxxx>
Signed-off-by: Andrew Zaborowski <andrew.zaborowski@xxxxxxxxx>
Tested-by: Farrah Chen <farrah.chen@xxxxxxxxx>
Tested-by: Qiuxu Zhuo <qiuxu.zhuo@xxxxxxxxx>
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@xxxxxxxxx>
Signed-off-by: Qiuxu Zhuo <qiuxu.zhuo@xxxxxxxxx>
---
mm/huge_memory.c | 3 ++-
mm/memory-failure.c | 6 ++++--
2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c38a95e9f09..1568f0308b90 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3588,6 +3588,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
struct list_head *list, bool uniform_split)
{
struct deferred_split *ds_queue = get_deferred_split_queue(folio);
+ bool has_hwpoisoned = folio_test_has_hwpoisoned(folio);
XA_STATE(xas, &folio->mapping->i_pages, folio->index);
struct folio *end_folio = folio_next(folio);
bool is_anon = folio_test_anon(folio);
@@ -3858,7 +3859,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
if (nr_shmem_dropped)
shmem_uncharge(mapping->host, nr_shmem_dropped);
- if (!ret && is_anon)
+ if (!ret && is_anon && !has_hwpoisoned)
remap_flags = RMP_USE_SHARED_ZEROPAGE;
remap_page(folio, 1 << order, remap_flags);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index df6ee59527dd..3ba6fd4079ab 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -2351,8 +2351,10 @@ int memory_failure(unsigned long pfn, int flags)
* otherwise it may race with THP split.
* And the flag can't be set in get_hwpoison_page() since
* it is called by soft offline too and it is just called
- * for !MF_COUNT_INCREASED. So here seems to be the best
- * place.
+ * for !MF_COUNT_INCREASED.
+ * It also tells split_huge_page() to not bother using
+ * the shared zeropage -- the all-zeros check would
+ * consume the poison. So here seems to be the best place.
*
* Don't need care about the above error handling paths for
* get_hwpoison_page() since they handle either free page

Hm, I wonder if we should actually check in try_to_map_unused_to_zeropage()
whether the page has the hwpoison flag set. Nothing wrong with scanning
non-affected pages.

In thp_underused() we should just skip the folio entirely I guess, so keep
it simple.

So what about something like this:

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9c38a95e9f091..d4109fd7fa1f2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -4121,6 +4121,9 @@ static bool thp_underused(struct folio *folio)
if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
return false;
+ if (folio_contain_hwpoisoned_page(folio))
+ return false;
+
for (i = 0; i < folio_nr_pages(folio); i++) {
kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 9e5ef39ce73af..393fc2ffc96e5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -305,8 +305,9 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
pte_t newpte;
void *addr;
- if (PageCompound(page))
+ if (PageCompound(page) || PageHWPoison(page))
return false;
+
VM_BUG_ON_PAGE(!PageAnon(page), page);
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(pte_present(ptep_get(pvmw->pte)), page);


--
Cheers

David / dhildenb