RE: [PATCH 1/1] mm: prevent poison consumption when splitting THP
From: Zhuo, Qiuxu
Date: Mon Sep 29 2025 - 09:52:41 EST
Hi David,
> From: David Hildenbrand <david@xxxxxxxxxx>
> [...]
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -2351,8 +2351,10 @@ int memory_failure(unsigned long pfn, int flags)
> > * otherwise it may race with THP split.
> > * And the flag can't be set in get_hwpoison_page() since
> > * it is called by soft offline too and it is just called
> > - * for !MF_COUNT_INCREASED. So here seems to be the best
> > - * place.
> > + * for !MF_COUNT_INCREASED.
> > + * It also tells split_huge_page() to not bother using
> > + * the shared zeropage -- the all-zeros check would
> > + * consume the poison. So here seems to be the best place.
> > *
> > * Don't need care about the above error handling paths for
> > * get_hwpoison_page() since they handle either free page
>
> Hm, I wonder if we should actually check in
> try_to_map_unused_to_zeropage() whether the page has the hwpoison flag
> set. Nothing wrong with scanning non-affected pages.
>
Good point about continuing to scan non-affected pages for possible zeropage mapping.
> In thp_underused() we should just skip the folio entirely I guess, so keep it
> simple.
>
> So what about something like this:
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9c38a95e9f091..d4109fd7fa1f2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -4121,6 +4121,9 @@ static bool thp_underused(struct folio *folio)
> if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
> return false;
>
> + folio_contain_hwpoisoned_page(folio)
Typo here 😊?
if (folio_contain_hwpoisoned_page(folio))
> + return false;
> +
> for (i = 0; i < folio_nr_pages(folio); i++) {
> kaddr = kmap_local_folio(folio, i * PAGE_SIZE);
> > if (!memchr_inv(kaddr, 0, PAGE_SIZE)) {
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 9e5ef39ce73af..393fc2ffc96e5 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -305,8 +305,9 @@ static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
> pte_t newpte;
> void *addr;
>
> - if (PageCompound(page))
> + if (PageCompound(page) || PageHWPoison(page))
> return false;
> +
> VM_BUG_ON_PAGE(!PageAnon(page), page);
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> VM_BUG_ON_PAGE(pte_present(ptep_get(pvmw->pte)), page);
>
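For anyone following along: the reason the all-zeros scan is dangerous on a poisoned folio is that memchr_inv() reads every byte of the page, and touching the poisoned cache line is what consumes the poison. A minimal userspace sketch of that scan (memchr_inv_sketch() is a hypothetical, simplified byte-at-a-time model of the kernel's memchr_inv(), not the real SIMD-friendly implementation):

```c
#include <stddef.h>

/*
 * Userspace sketch (assumption: simplified byte-at-a-time model of the
 * kernel's memchr_inv()). Returns a pointer to the first byte that does
 * NOT equal 'c', or NULL if the whole range matches. Note that deciding
 * "all zeros" requires reading every byte of the range -- on a
 * hw-poisoned page that read itself would trigger the machine check,
 * which is why thp_underused() must bail out before scanning.
 */
static const void *memchr_inv_sketch(const void *start, int c, size_t bytes)
{
	const unsigned char *p = start;
	size_t i;

	for (i = 0; i < bytes; i++)
		if (p[i] != (unsigned char)c)
			return p + i;
	return NULL;
}
```

So skipping the folio in thp_underused() (and the PageHWPoison() check in try_to_map_unused_to_zeropage()) keeps this scan from ever dereferencing the poisoned page.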
I tested this diff and it works well.
If there are no objections, I'll use this diff for v2.
Thanks David.
-Qiuxu