[PATCH 3.16.y-ckt 110/183] mm: hwpoison: drop lru_add_drain_all() in __soft_offline_page()
From: Luis Henriques
Date: Fri Mar 06 2015 - 05:28:42 EST
3.16.7-ckt8 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
commit 9ab3b598d2dfbdb0153ffa7e4b1456bbff59a25d upstream.
A race condition starts to be visible in recent mmotm, where a
PG_hwpoison flag is set on a migration source page *before* it's back
in the buddy page pool.
This is problematic because no page flag is supposed to be set when
freeing (see __free_one_page()). So the user-visible effect of this
race is that it could trigger the BUG_ON() when soft-offlining is
called.
The root cause is that we call lru_add_drain_all() to make sure that
the page is in buddy, but that doesn't work because this function just
schedules a work item and doesn't wait for its completion.
drain_all_pages() does the draining directly, so simply dropping
lru_add_drain_all() solves this problem.
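For reference, a minimal sketch of the resulting sequence in
__soft_offline_page() once this patch is applied (reconstructed from
the diff below; the comments are explanatory glosses and not part of
the patch itself):

	if (!is_free_buddy_page(page))
		drain_all_pages();	/* drains per-cpu lists directly */
	SetPageHWPoison(page);		/* flag set only after draining */
	if (!is_free_buddy_page(page))
		/* ... continues as in the full function ... */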
Fixes: f15bdfa802bf ("mm/memory-failure.c: fix memory leak in successful soft offlining")
Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Andi Kleen <andi@xxxxxxxxxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Cc: Chen Gong <gong.chen@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Luis Henriques <luis.henriques@xxxxxxxxxxxxx>
---
mm/memory-failure.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a013bc94ebbe..607a6d62bcab 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1649,8 +1649,6 @@ static int __soft_offline_page(struct page *page, int flags)
 		 * setting PG_hwpoison.
 		 */
 		if (!is_free_buddy_page(page))
-			lru_add_drain_all();
-		if (!is_free_buddy_page(page))
 			drain_all_pages();
 		SetPageHWPoison(page);
 		if (!is_free_buddy_page(page))
--