[PATCH] mm: Fix up unmap desc use on exit_mmap()
From: Liam R. Howlett
Date: Tue Feb 10 2026 - 16:44:46 EST
On exiting mmap, the page table vma limits were set to 0 - ULONG_MAX.
These settings will trigger the WARN_ON_ONCE() because the vma end will
be larger than the page table end (which is set to TASK_SIZE in this
case).
Add an unmap_pgtable_init() to initialize the vma range to the user
address limits, as was done before, to avoid triggering the
WARN_ON_ONCE() in free_pgtables().
Comments have been added to unmap_pgtable_init() regarding the arm
arch behaviour surrounding the vmas.
Signed-off-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
---
Andrew,
This is a pretty significant change to the last patch of the series.
Please let me know if you want me to resend the series for this.
The Reviewed-by tags should be dropped, at least.
Reported-by: Chris Mason <clm@xxxxxxxx> (via AI tools)
Fixes: [PATCH v3 11/11] mm: Use unmap_desc struct for freeing page tables.
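
For reference, a quick userspace sketch (not part of the patch; the
addresses below are made up for illustration) of the check's behaviour:
with a pg_end of USER_PGTABLES_CEILING, which may be 0, the "pg_end - 1"
underflow keeps the WARN_ON_ONCE() quiet, while a leftover vma_end of
ULONG_MAX against a TASK_SIZE-like pg_end trips it, which is what
exit_mmap() was hitting:

#include <stdio.h>

/* Same shape as the WARN_ON_ONCE() condition in free_pgtables(). */
static int would_warn(unsigned long vma_end, unsigned long pg_end)
{
	return vma_end - 1 > pg_end - 1;
}

int main(void)
{
	/* pg_end of 0: the underflow means the check can never fire */
	printf("%d\n", would_warn(0x7f0000000000UL, 0UL));	/* prints 0 */

	/* vma_end of ULONG_MAX against a TASK_SIZE-like pg_end: warns */
	printf("%d\n", would_warn(~0UL, 0x800000000000UL));	/* prints 1 */
	return 0;
}
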
mm/memory.c | 8 +++-----
mm/mmap.c | 2 +-
mm/vma.h | 23 +++++++++++++++++++++++
3 files changed, 27 insertions(+), 6 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index abb41cb66ced9..befa3cbe5358a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -391,11 +391,9 @@ void free_pgtables(struct mmu_gather *tlb, struct unmap_desc *unmap)
/*
* Note: USER_PGTABLES_CEILING may be passed as the value of pg_end and
- * may be 0. The underflow here is fine and expected.
- * The vma_end is exclusive, which is fine until we use the mas_ instead
- * of the vma iterators.
- * For freeing the page tables to make sense, the vma_end must be larger
- * than the pg_end, so check that after the potential underflow.
+ * may be 0. Underflow is expected in this case. Otherwise the
+ * pagetable end is exclusive. vma_end is exclusive. The last vma
+ * address should never be larger than the pagetable end.
*/
WARN_ON_ONCE(unmap->vma_end - 1 > unmap->pg_end - 1);
diff --git a/mm/mmap.c b/mm/mmap.c
index 8771b276d63db..a03b7681e13c2 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1309,7 +1309,7 @@ void exit_mmap(struct mm_struct *mm)
mmap_write_lock(mm);
unmap.mm_wr_locked = true;
mt_clear_in_rcu(&mm->mm_mt);
- vma_iter_set(&vmi, unmap.tree_reset);
+ unmap_pgtable_init(&unmap, &vmi);
free_pgtables(&tlb, &unmap);
tlb_finish_mmu(&tlb);
diff --git a/mm/vma.h b/mm/vma.h
index 83db6beaa985d..d02154c3ceade 100644
--- a/mm/vma.h
+++ b/mm/vma.h
@@ -167,6 +167,10 @@ struct unmap_desc {
bool mm_wr_locked; /* If the mmap write lock is held */
};
+/*
+ * unmap_all_init() - Initialize unmap_desc to remove all vmas, point the
+ * pg_start and pg_end to a safe location.
+ */
static inline void unmap_all_init(struct unmap_desc *unmap,
struct vma_iterator *vmi, struct vm_area_struct *vma)
{
@@ -181,6 +185,25 @@ static inline void unmap_all_init(struct unmap_desc *unmap,
unmap->mm_wr_locked = false;
}
+/*
+ * unmap_pgtable_init() - Initialize unmap_desc to remove all page tables within
+ * the user range.
+ *
+ * ARM can have mappings outside of vmas.
+ * See: e2cdef8c847b4 ("[PATCH] freepgt: free_pgtables from FIRST_USER_ADDRESS")
+ *
+ * ARM LPAE uses page table mappings beyond the USER_PGTABLES_CEILING
+ * See: CONFIG_ARM_LPAE in arch/arm/include/asm/pgtable.h
+ */
+static inline void unmap_pgtable_init(struct unmap_desc *unmap,
+ struct vma_iterator *vmi)
+{
+ vma_iter_set(vmi, unmap->tree_reset);
+ unmap->vma_start = FIRST_USER_ADDRESS;
+ unmap->vma_end = USER_PGTABLES_CEILING;
+ unmap->tree_end = USER_PGTABLES_CEILING;
+}
+
#define UNMAP_STATE(name, _vmi, _vma, _vma_start, _vma_end, _prev, _next) \
struct unmap_desc name = { \
.mas = &(_vmi)->mas, \
--
2.47.3