[PATCH 3.13.y-ckt 096/143] mm: hwpoison: drop lru_add_drain_all() in __soft_offline_page()
From: Kamal Mostafa
Date: Tue Mar 31 2015 - 16:05:24 EST
3.13.11-ckt18 -stable review patch. If anyone has any objections, please let me know.
------------------
From: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
commit 9ab3b598d2dfbdb0153ffa7e4b1456bbff59a25d upstream.
A race condition starts to be visible in recent mmotm, where a PG_hwpoison
flag is set on a migration source page *before* it's back in the buddy page
pool.
This is problematic because no page flag is supposed to be set when
freeing (see __free_one_page().) So the user-visible effect of this race
is that it could trigger the BUG_ON() when soft-offlining is called.
The root cause is that we call lru_add_drain_all() to make sure that the
page is in buddy, but that doesn't work because this function just
schedules a work item and doesn't wait for its completion.
drain_all_pages() does the draining directly, so simply dropping
lru_add_drain_all() solves this problem.
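For context, here is a minimal sketch of the resulting control flow in
__soft_offline_page(), reconstructed from the hunk below; the surrounding
error handling is omitted and may differ slightly in this tree:

	/*
	 * Make sure the migration source page has actually been returned
	 * to the buddy allocator before poisoning it.  drain_all_pages()
	 * drains the per-cpu free lists directly, whereas (as described
	 * above) lru_add_drain_all() only schedules work items and does
	 * not wait for their completion, so it cannot provide that
	 * guarantee.
	 */
	if (!is_free_buddy_page(page))
		drain_all_pages();
	SetPageHWPoison(page);
	/* ... followed by a final is_free_buddy_page(page) check */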
Fixes: f15bdfa802bf ("mm/memory-failure.c: fix memory leak in successful soft offlining")
Signed-off-by: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
Cc: Andi Kleen <andi@xxxxxxxxxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Cc: Chen Gong <gong.chen@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Kamal Mostafa <kamal@xxxxxxxxxxxxx>
---
mm/memory-failure.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index ba1ab14..112be59f 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1639,8 +1639,6 @@ static int __soft_offline_page(struct page *page, int flags)
* setting PG_hwpoison.
*/
if (!is_free_buddy_page(page))
- lru_add_drain_all();
- if (!is_free_buddy_page(page))
drain_all_pages();
SetPageHWPoison(page);
if (!is_free_buddy_page(page))
--
1.9.1