[PATCH v2 1/2] mm/damon: validate if the pmd entry is present before accessing
From: Baolin Wang
Date: Thu Aug 18 2022 - 03:38:06 EST
pmd_huge() is used to check whether a pmd entry maps a huge page, but
on the arm64 and x86 architectures it also returns true for non-present
(migration or hwpoisoned) pmd entries. For such non-present entries,
pmd_pfn() cannot return a valid pfn, so damon_get_page() ends up with a
bogus page struct (or NULL, via pfn_to_online_page()), which makes the
access statistics incorrect.
Moreover, it makes no sense to spend time looking up the page of a
non-present entry in the first place. Just treat it as not accessed and
skip it, which is consistent with how non-present pte-level entries are
handled.
Thus, add a pmd entry presence check to fix the above issues.
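To illustrate, the problematic pmd-level path looks roughly like this
(a simplified sketch of the pre-patch logic in damon_young_pmd_entry(),
trimmed for brevity and not the exact tree state):

	if (pmd_huge(*pmd)) {	/* also true for non-present entries */
		ptl = pmd_lock(walk->mm, pmd);
		if (pmd_huge(*pmd)) {
			/*
			 * For a migration or hwpoisoned entry, pmd_pfn()
			 * does not yield a valid pfn, so damon_get_page()
			 * returns the wrong page struct, or NULL via
			 * pfn_to_online_page().
			 */
			page = damon_get_page(pmd_pfn(*pmd));
			...
		}
	}

Checking pmd_present() right after taking the pmd lock, and returning
early when the entry is not present, avoids the bogus lookup entirely;
doing the check under the lock ensures the entry cannot change between
the check and its use.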
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Reviewed-by: SeongJae Park <sj@xxxxxxxxxx>
---
Changes from v1:
- Update the commit message to make it clearer.
- Add Reviewed-by tag from SeongJae.
---
mm/damon/vaddr.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 3c7b9d6..1d16c6c 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -304,6 +304,11 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 	if (pmd_huge(*pmd)) {
 		ptl = pmd_lock(walk->mm, pmd);
+		if (!pmd_present(*pmd)) {
+			spin_unlock(ptl);
+			return 0;
+		}
+
 		if (pmd_huge(*pmd)) {
 			damon_pmdp_mkold(pmd, walk->mm, addr);
 			spin_unlock(ptl);
@@ -431,6 +436,11 @@ static int damon_young_pmd_entry(pmd_t *pmd, unsigned long addr,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (pmd_huge(*pmd)) {
 		ptl = pmd_lock(walk->mm, pmd);
+		if (!pmd_present(*pmd)) {
+			spin_unlock(ptl);
+			return 0;
+		}
+
 		if (!pmd_huge(*pmd)) {
 			spin_unlock(ptl);
 			goto regular_page;
--
1.8.3.1