[PATCH 2/9] mm/rmap: initialize nr_pages to 1 at loop start in try_to_unmap_one
From: Dev Jain
Date: Tue Mar 10 2026 - 03:35:03 EST
Initialize nr_pages to 1 at the start of the loop, similar to what is done
in folio_referenced_one(). Otherwise, the nr_pages computed by a previous
call to folio_unmap_pte_batch() can be reused on a later iteration that
never goes through folio_unmap_pte_batch(), leading to incorrect behaviour.
I don't think this is an actual bug right now: a bug would arise only if,
within the same call to try_to_unmap_one(), we took the
pte_present(pteval) branch and then the else branch doing pte_clear() for
a device-exclusive pte. That would mean a lazyfree folio is mapped by both
present entries and device-exclusive entries. Since a pte being
device-exclusive implies that a GUP reference is held on the underlying
folio, the lazyfree unmapping path will notice this and abort
try_to_unmap_one().
Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
---
mm/rmap.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 087c9f5b884fe..1fa020edd954a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1982,7 +1982,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
unsigned long end_addr;
unsigned long pfn;
unsigned long hsz = 0;
- long nr_pages = 1;
+ long nr_pages;
int ptes = 0;
/*
@@ -2019,6 +2019,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
mmu_notifier_invalidate_range_start(&range);
while (page_vma_mapped_walk(&pvmw)) {
+ nr_pages = 1;
+
/*
* If the folio is in an mlock()d vma, we must not swap it out.
*/
--
2.34.1