[PATCH] hwpoison: Fix race with changing page during offlining

From: Andi Kleen
Date: Thu Jun 26 2014 - 14:23:15 EST


From: Andi Kleen <ak@xxxxxxxxxxxxxxx>

While running the mcelog test suite on 3.14 I hit the following VM_BUG_ON:

soft_offline: 0x56d4: unknown non LRU page type 3ffff800008000
page:ffffea000015b400 count:3 mapcount:2097169 mapping: (null) index:0xffff8800056d7000
page flags: 0x3ffff800004081(locked|slab|head)
------------[ cut here ]------------
kernel BUG at mm/rmap.c:1495!

I think what happened is that an LRU page turned into a slab page in parallel
with the offlining. memory_failure() initially tests for this case, but doesn't
retest after the page has been locked.

This patch closes the race by rechecking the page state after the page lock
has been taken. It also checks for the case that the page has become part of
a different compound page.

Unfortunately, since it's a race I wasn't able to reproduce it later,
so the specific case is not tested.
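
For reference, the underlying problem is the classic check/lock/recheck
pattern: any page state observed before lock_page() can already be stale by
the time the lock is held. Below is a minimal user-space analogy (pthreads,
made-up names, deliberately not the kernel code) of why the state has to be
revalidated under the lock, which is what the hunk below adds to
memory_failure():

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Made-up stand-in for the bits of page state memory_failure() looks at. */
struct fake_page {
	pthread_mutex_t lock;
	bool on_lru;		/* may be flipped by a concurrent offliner */
};

/* Returns 0 if we handled the page, -1 if it changed under us. */
static int handle_poison(struct fake_page *p)
{
	if (!p->on_lru)		/* unlocked pre-check; may already be stale */
		return -1;

	/* <-- window: the page can be freed and reused (e.g. as slab) here */

	pthread_mutex_lock(&p->lock);
	if (!p->on_lru) {	/* recheck under the lock */
		pthread_mutex_unlock(&p->lock);
		return -1;	/* bail out, like the new -EBUSY paths */
	}
	/* page state is stable for as long as we hold the lock */
	pthread_mutex_unlock(&p->lock);
	return 0;
}

int main(void)
{
	struct fake_page p = { .lock = PTHREAD_MUTEX_INITIALIZER, .on_lru = true };
	printf("handle_poison: %d\n", handle_poison(&p));
	return 0;
}

In the kernel case the "state" is PageLRU()/compound_head(), the lock is the
page lock, and the bail-out path returns -EBUSY.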

Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: dave.hansen@xxxxxxxxxxxxxxx
Signed-off-by: Andi Kleen <ak@xxxxxxxxxxxxxxx>
---
mm/memory-failure.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 90002ea..e277726a 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1143,6 +1143,22 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
 	lock_page(hpage);
 
 	/*
+	 * The page could have turned into a non LRU page or
+	 * changed compound pages during the locking.
+	 * If this happens just bail out.
+	 */
+	if (compound_head(p) != hpage) {
+		action_result(pfn, "different compound page after locking", IGNORED);
+		res = -EBUSY;
+		goto out;
+	}
+	if (!PageLRU(hpage)) {
+		action_result(pfn, "non LRU after locking", IGNORED);
+		res = -EBUSY;
+		goto out;
+	}
+
+	/*
 	 * We use page flags to determine what action should be taken, but
 	 * the flags can be modified by the error containment action. One
 	 * example is an mlocked page, where PG_mlocked is cleared by
--
1.9.3
