[PATCH v2 3/3] mm: memory-hotplug: check folio ref count first in do_migrate_range
From: Wupeng Ma
Date: Thu Jan 16 2025 - 01:25:23 EST
From: Ma Wupeng <mawupeng1@xxxxxxxxxx>
If a folio has an elevated reference count, folio_try_get() takes the
reference, the necessary operations are performed, and the reference is
dropped afterwards. For a poisoned folio whose reference count was not
elevated (which is unlikely after memory-failure), folio_try_get() fails
and the folio is simply skipped.

Therefore, move the folio_try_get() call, which checks and takes this
reference, to the beginning of the loop.
Signed-off-by: Ma Wupeng <mawupeng1@xxxxxxxxxx>
---
mm/memory_hotplug.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
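
Not for the commit log: a condensed sketch of how the per-pfn loop in
do_migrate_range() is ordered once this patch is applied. Unrelated
handling is elided; the folio_put() at the existing put_folio label is
shown only to illustrate where the reference taken up front is dropped.

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		page = pfn_to_page(pfn);
		folio = page_folio(page);

		/*
		 * Take the reference first: a poisoned folio whose
		 * refcount was never elevated fails here and is skipped.
		 */
		if (!folio_try_get(folio))
			continue;

		if (folio_test_large(folio))
			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;

		if (folio_test_hwpoison(folio) ||
		    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
			/*
			 * Unmap if still mapped, then drop the reference
			 * taken above instead of a bare continue.
			 */
			goto put_folio;
		}

		/* ... isolation and migration ... */
put_folio:
		folio_put(folio);
	}
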
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2815bd4ea483..3fb75ee185c6 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1786,6 +1786,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		page = pfn_to_page(pfn);
 		folio = page_folio(page);
 
+		if (!folio_try_get(folio))
+			continue;
+
 		/*
 		 * No reference or lock is held on the folio, so it might
 		 * be modified concurrently (e.g. split). As such,
@@ -1795,12 +1798,6 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 		if (folio_test_large(folio))
 			pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
 
-		/*
-		 * HWPoison pages have elevated reference counts so the migration would
-		 * fail on them. It also doesn't make any sense to migrate them in the
-		 * first place. Still try to unmap such a page in case it is still mapped
-		 * (keep the unmap as the catch all safety net).
-		 */
 		if (folio_test_hwpoison(folio) ||
 		    (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
 			if (WARN_ON(folio_test_lru(folio)))
@@ -1811,12 +1808,9 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 				folio_unlock(folio);
 			}
 
-			continue;
+			goto put_folio;
 		}
 
-		if (!folio_try_get(folio))
-			continue;
-
 		if (unlikely(page_folio(page) != folio))
 			goto put_folio;
 
--
2.43.0