[PATCH v1 5/6] mm/hwpoison: make some kernel pages handlable
From: Naoya Horiguchi
Date: Sun Jun 13 2021 - 22:12:55 EST
From: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
HWPoisonHandlable(), introduced by the patch "mm,hwpoison: fix race with
hugetlb page allocation", filters error events by page type, so that only
a limited set of events reaches get_page_unless_zero(), avoiding races.
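For reference, the refcount-taking path works roughly like this (a
simplified sketch of __get_hwpoison_page(); the real function handles
more cases such as THP):

  static int __get_hwpoison_page(struct page *page)
  {
          struct page *head = compound_head(page);

          /* Only page types we know how to handle get a refcount taken. */
          if (!HWPoisonHandlable(head))
                  return 0;

          /* Pin the page so it cannot be freed or reused under us. */
          if (get_page_unless_zero(head))
                  return 1;

          return 0;
  }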
Actually this is too restrictive, because get_hwpoison_page() always fails
to take a refcount on any type of kernel page, leading to
MF_MSG_KERNEL_HIGH_ORDER. This is not critical (no panic), but it is less
informative than MF_MSG_SLAB or MF_MSG_PAGETABLE, so extend
HWPoisonHandlable() to cover some basic types of kernel pages (slab,
pgtable, and reserved pages).
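With the extended filter, memory_failure() can pin these pages and report
a page-type specific message instead; this mirrors the new branches added
in the hunks below:

  if (PageSlab(p))                /* slab pages                */
          action_result(pfn, MF_MSG_SLAB, MF_IGNORED);
  else if (PageTable(p))          /* page table pages          */
          action_result(pfn, MF_MSG_PAGETABLE, MF_IGNORED);
  else if (PageReserved(p))       /* reserved kernel pages     */
          action_result(pfn, MF_MSG_KERNEL, MF_IGNORED);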
The "handling" for these types are still primitive (just taking refcount
and setting PG_hwpoison) and some more aggressive actions for memory
error containment are possible and wanted. But compared to the older code,
these cases never enter the code block of page locks (note that
page locks is not well-defined on these pages), so it's a little safer
for functions intended for user pages not to be called for kernel pages.
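In other words, the intended ordering in memory_failure() after this patch
is roughly the following (heavily simplified, most branches and error
handling omitted):

  /*
   * PG_hwpoison is already set and get_hwpoison_page() has pinned the
   * page through the extended HWPoisonHandlable() filter.
   */
  if (PageSlab(p) || PageTable(p) || PageReserved(p)) {
          /* Report and bail out here, before the page lock is taken, so
           * the user-page handling below never runs on kernel pages. */
          return -EBUSY;
  }

  lock_page(p);           /* only user-mapped pages reach this point */
  /* hwpoison_user_mappings() and the rest of the user-page path follow */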
Signed-off-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
---
mm/memory-failure.c | 28 ++++++++++++++++++++--------
1 file changed, 20 insertions(+), 8 deletions(-)
diff --git v5.13-rc5/mm/memory-failure.c v5.13-rc5_patched/mm/memory-failure.c
index b986936e50eb..0d51067f0129 100644
--- v5.13-rc5/mm/memory-failure.c
+++ v5.13-rc5_patched/mm/memory-failure.c
@@ -1113,7 +1113,8 @@ static int page_action(struct page_state *ps, struct page *p,
*/
static inline bool HWPoisonHandlable(struct page *page)
{
- return PageLRU(page) || __PageMovable(page);
+ return PageLRU(page) || __PageMovable(page) ||
+ PageSlab(page) || PageTable(page) || PageReserved(page);
}
static int __get_hwpoison_page(struct page *page)
@@ -1260,12 +1261,6 @@ static bool hwpoison_user_mappings(struct page *p, unsigned long pfn,
struct page *hpage = *hpagep;
bool mlocked = PageMlocked(hpage);
- /*
- * Here we are interested only in user-mapped pages, so skip any
- * other types of pages.
- */
- if (PageReserved(p) || PageSlab(p))
- return true;
if (!(PageLRU(hpage) || PageHuge(p)))
return true;
@@ -1670,7 +1665,10 @@ int memory_failure(unsigned long pfn, int flags)
action_result(pfn, MF_MSG_BUDDY, res);
res = res == MF_RECOVERED ? 0 : -EBUSY;
} else {
- action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED);
+ if (PageCompound(p))
+ action_result(pfn, MF_MSG_KERNEL_HIGH_ORDER, MF_IGNORED);
+ else
+ action_result(pfn, MF_MSG_KERNEL, MF_IGNORED);
res = -EBUSY;
}
goto unlock_mutex;
@@ -1681,6 +1679,20 @@ int memory_failure(unsigned long pfn, int flags)
}
}
+ if (PageSlab(p)) {
+ action_result(pfn, MF_MSG_SLAB, MF_IGNORED);
+ res = -EBUSY;
+ goto unlock_mutex;
+ } else if (PageTable(p)) {
+ action_result(pfn, MF_MSG_PAGETABLE, MF_IGNORED);
+ res = -EBUSY;
+ goto unlock_mutex;
+ } else if (PageReserved(p)) {
+ action_result(pfn, MF_MSG_KERNEL, MF_IGNORED);
+ res = -EBUSY;
+ goto unlock_mutex;
+ }
+
if (PageTransHuge(hpage)) {
if (try_to_split_thp_page(p, "Memory Failure") < 0) {
action_result(pfn, MF_MSG_UNSPLIT_THP, MF_IGNORED);
--
2.25.1