Re: [PATCH v7] mm/gup: check page hwpoison status for memory recovery failures.

From: HORIGUCHI NAOYA(堀口 直也)
Date: Tue Apr 06 2021 - 21:54:37 EST


On Tue, Apr 06, 2021 at 10:41:23AM +0800, Aili Yao wrote:
> When we call get_user_pages() to pin user pages in memory, some of them
> may be hwpoison pages. Currently we only handle the normal case, where
> the memory recovery job has finished correctly, and we do not return the
> hwpoison page to callers. But in other cases, e.g. when memory recovery
> fails and the related pte of the user process is not correctly
> invalidated, we will still return the hwpoison page, and callers may
> touch it and panic.
>
> In gup.c, for a normal page, follow_page_mask() returns the related page
> pointer; for the other hwpoison case, where the pte has been invalidated,
> it returns NULL, which is handled in the if (!page) branch. This patch
> filters out hwpoison pages in follow_page_mask() and returns an error
> code for the recovery failure cases.
>
> We check the page hwpoison status as early as possible, skip the normal
> procedure that would follow, and try not to grab the related pages.
>
> Changes since v6:
> - Fix wrong page pointer check in follow_trans_huge_pmd();
>
> Signed-off-by: Aili Yao <yaoaili@xxxxxxxxxxxx>
> Cc: David Hildenbrand <david@xxxxxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
> mm/gup.c         | 27 +++++++++++++++++++++++----
> mm/huge_memory.c | 11 ++++++++---
> mm/hugetlb.c     |  8 +++++++-
> mm/internal.h    | 13 +++++++++++++
> 4 files changed, 51 insertions(+), 8 deletions(-)

Thank you for the work.

Looking through this patch, the internals of follow_page_mask() are
very complicated, so it's not easy to make them hwpoison-aware.
Now I'm getting unsure whether this is the best approach.
What I actually imagined is something like the diff below (which is
totally untested, and I'm sorry about my previous misleading comments):

diff --git a/mm/gup.c b/mm/gup.c
index e40579624f10..a60a08fc7668 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1090,6 +1090,11 @@ static long __get_user_pages(struct mm_struct *mm,
 		} else if (IS_ERR(page)) {
 			ret = PTR_ERR(page);
 			goto out;
+		} else if (gup_flags & FOLL_HWPOISON && PageHWPoison(page)) {
+			if (gup_flags & FOLL_GET)
+				put_page(page);
+			ret = -EHWPOISON;
+			goto out;
 		}
 		if (pages) {
 			pages[i] = page;
@@ -1532,7 +1537,7 @@ struct page *get_dump_page(unsigned long addr)
 	if (mmap_read_lock_killable(mm))
 		return NULL;
 	ret = __get_user_pages_locked(mm, addr, 1, &page, NULL, &locked,
-				      FOLL_FORCE | FOLL_DUMP | FOLL_GET);
+				      FOLL_FORCE | FOLL_DUMP | FOLL_GET | FOLL_HWPOISON);
 	if (locked)
 		mmap_read_unlock(mm);
 	return (ret == 1) ? page : NULL;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a86a58ef132d..03c3d3225c0d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4949,6 +4949,14 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 			continue;
 		}
 
+		if (flags & FOLL_HWPOISON && PageHWPoison(page)) {
+			vaddr += huge_page_size(h);
+			remainder -= pages_per_huge_page(h);
+			i += pages_per_huge_page(h);
+			spin_unlock(ptl);
+			continue;
+		}
+
 		refs = min3(pages_per_huge_page(h) - pfn_offset,
 			    (vma->vm_end - vaddr) >> PAGE_SHIFT, remainder);
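
To make the intended effect concrete, here is a hypothetical caller-side
sketch (the function name is made up, and it assumes the v5.12-era
get_user_pages() signature; it is not part of the patch) showing how a GUP
user that passes FOLL_HWPOISON would observe the proposed -EHWPOISON result:

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical caller, for illustration only (not part of the patch). */
static int pin_one_user_page(unsigned long addr, struct page **pagep)
{
	long ret;

	mmap_read_lock(current->mm);
	ret = get_user_pages(addr, 1, FOLL_HWPOISON, pagep, NULL);
	mmap_read_unlock(current->mm);

	/* With the change above, a still-mapped poisoned page shows up here. */
	if (ret == -EHWPOISON)
		return -EHWPOISON;
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* Success: we hold a reference; drop it with put_page() when done. */
	return 0;
}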


We can safely say that this change only affects get_user_pages() callers
that pass FOLL_HWPOISON, so it should pinpoint the current problem only.
As a side note, the above change to follow_hugetlb_page() has room for
refactoring to reduce the duplicated code.
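
One possible shape for that refactoring, as a purely untested sketch with a
hypothetical helper name (both the new hwpoison block and the existing
whole-huge-page fast path could call it):

/*
 * Hypothetical helper, not part of the patch: factor out the "advance
 * over one whole huge page" bookkeeping that follow_hugetlb_page()
 * would otherwise duplicate between its fast path and the hwpoison check.
 */
static inline void hugetlb_advance_one_page(struct hstate *h,
					    unsigned long *vaddr,
					    unsigned long *remainder, long *i)
{
	*vaddr += huge_page_size(h);
	*remainder -= pages_per_huge_page(h);
	*i += pages_per_huge_page(h);
}

The callers would still have to drop the page table lock and continue the
loop themselves, since that part differs between the two sites.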

Could you try to test and complete it?

Thanks,
Naoya Horiguchi