Re: [PATCH v2 2/2] mm/madvise: allow guard page install/remove under VMA lock

From: Chris Mason

Date: Tue Jan 13 2026 - 17:09:22 EST


On Mon, 10 Nov 2025 17:22:58 +0000 Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx> wrote:

> We only need to keep the page table stable so we can perform this operation
> under the VMA lock. PTE installation is stabilised via the PTE lock.
>
> One caveat is that, if we prepare vma->anon_vma we must hold the mmap read
> lock. We can account for this by adapting the VMA locking logic to
> explicitly check for this case and prevent a VMA lock from being acquired
> should it be the case.
>
> This check is safe: while we might be raced on anon_vma installation,
> this would simply make the check conservative. There's no way for us to see
> an anon_vma and then for it to be cleared, as doing so requires the
> mmap/VMA write lock.
>
> We abstract the VMA lock validity logic to is_vma_lock_sufficient() for
> this purpose, and add prepares_anon_vma() to abstract the anon_vma logic.
>
> In order to do this we need to have a way of installing page tables
> explicitly for an identified VMA, so we export walk_page_range_vma() in an
> unsafe variant - walk_page_range_vma_unsafe() and use this should the VMA
> read lock be taken.
>
> We additionally update the comments in madvise_guard_install() to more
> accurately reflect the cases in which the logic may be reattempted,
> specifically THP huge pages being present.
>

Hi Lorenzo, one more flagged by AI:

> diff --git a/mm/madvise.c b/mm/madvise.c
> index 7ed5bedb8f8ef..2a165e9beb5bb 100644

[ ... ]

> @@ -1173,8 +1180,13 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> unsigned long nr_pages = 0;
>
> /* Returns < 0 on error, == 0 if success, > 0 if zap needed. */
> - err = walk_page_range_mm_unsafe(vma->vm_mm, range->start,
> - range->end, &guard_install_walk_ops, &nr_pages);
> + if (madv_behavior->lock_mode == MADVISE_VMA_READ_LOCK)
> + err = walk_page_range_vma_unsafe(madv_behavior->vma,
> + range->start, range->end, &walk_ops,
> + &nr_pages);
> + else
> + err = walk_page_range_mm_unsafe(vma->vm_mm, range->start,
> + range->end, &walk_ops, &nr_pages);
> if (err < 0)
> return err;
>
> @@ -1195,8 +1207,7 @@ static long madvise_guard_install(struct madvise_behavior *madv_behavior)
> }

When err > 0 (zap needed), zap_page_range_single() is called below
regardless of lock_mode. If lock_mode == MADVISE_VMA_READ_LOCK (so only
the VMA lock is held) and the zap range contains a THP PUD mapping that
requires splitting, the call chain:

madvise_guard_install()
-> zap_page_range_single()
-> zap_page_range_single_batched()
-> unmap_single_vma()
-> unmap_page_range()
-> zap_p4d_range()
-> zap_pud_range()

will hit mmap_assert_locked() in zap_pud_range() (mm/memory.c) when
pud_trans_huge() is true and partial PUD splitting is required.

The assertion will trigger a WARN_ON because only the VMA lock is held,
not the mmap lock.

Is there an additional check needed before calling zap_page_range_single()
when in VMA lock mode, or should this path fall back to mmap lock when THP
is possible?

> @@ -1715,9 +1763,9 @@ static enum madvise_lock_mode get_lock_mode(struct madvise_behavior *madv_behavi
> case MADV_POPULATE_READ:
> case MADV_POPULATE_WRITE:
> case MADV_COLLAPSE:
> + return MADVISE_MMAP_READ_LOCK;
> case MADV_GUARD_INSTALL:
> case MADV_GUARD_REMOVE:
> - return MADVISE_MMAP_READ_LOCK;
> case MADV_DONTNEED:
> case MADV_DONTNEED_LOCKED:
> case MADV_FREE:

This change makes MADV_GUARD_INSTALL (and MADV_GUARD_REMOVE) fall through
to MADVISE_VMA_READ_LOCK, but zap_page_range_single(), called later in
madvise_guard_install(), may require the mmap lock for THP PUD splitting,
as noted above.