Re: [RFC] mm: stress-ng --mremap triggers severe lruvec lock contention in populate/unmap paths
From: Hugh Dickins
Date: Tue Apr 07 2026 - 20:35:31 EST
On Tue, 7 Apr 2026, John Hubbard wrote:
> On 4/7/26 1:09 PM, Joseph Salisbury wrote:
> > Hello,
> >
> > I would like to ask for feedback on an MM performance issue triggered by
> > stress-ng's mremap stressor:
> >
> > stress-ng --mremap 8192 --mremap-bytes 4K --timeout 30 --metrics-brief
> >
> > This was first investigated as a possible regression from 0ca0c24e3211
> > ("mm: store zero pages to be swapped out in a bitmap"), but the current
> > evidence suggests that commit is mostly exposing an older problem for
> > this workload rather than directly causing it.
> >
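
(For reference, reproducing this without stress-ng: the stressor's hot
path boils down to a loop like the sketch below. This is a
single-threaded approximation, not lifted from stress-ng source - the
4K size comes from the command above, the other sizes and flags are
assumed, and stress-ng runs 8192 such workers in parallel.)

#define _GNU_SOURCE
#include <sys/mman.h>

int main(void)
{
	for (;;) {
		/* MAP_POPULATE is what drives populate_vma_page_range() */
		char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
			       -1, 0);
		if (p == MAP_FAILED)
			return 1;

		/* move/resize the mapping, then tear it down */
		p = mremap(p, 4096, 8192, MREMAP_MAYMOVE);
		if (p == MAP_FAILED)
			return 1;
		munmap(p, 8192);
	}
}
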
>
> Can you try this out? (Adding Hugh to Cc.)
>
> From: John Hubbard <jhubbard@xxxxxxxxxx>
> Date: Tue, 7 Apr 2026 15:33:47 -0700
> Subject: [PATCH] mm/gup: skip lru_add_drain() for non-locked populate
> X-NVConfidentiality: public
> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
>
> populate_vma_page_range() calls lru_add_drain() unconditionally after
> __get_user_pages(). With high-frequency single-page MAP_POPULATE/munmap
> cycles at high thread counts, this forces a lruvec->lru_lock acquire
> per page, defeating per-CPU folio_batch batching.
>
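
(Aside, to make the defeated batching concrete: freshly faulted-in
folios are first staged in a small per-CPU batch, and only moved onto
the LRU list, under lruvec->lru_lock, once that batch fills or someone
drains it. A rough userspace analogy follows; the batch size and all
names are invented for illustration, this is not kernel code:)

#include <pthread.h>
#include <stdio.h>

#define BATCH 15	/* small fixed-size batch, like a folio_batch */

static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
static long lru[4096];	/* stand-in for the shared LRU list */
static int nr_lru;

/* per-thread staging area, standing in for the per-CPU folio batch */
static __thread long batch[BATCH];
static __thread int batched;

static void drain(void)		/* analogue of lru_add_drain() */
{
	pthread_mutex_lock(&lru_lock);	/* the contended lruvec->lru_lock */
	for (int i = 0; i < batched; i++)
		lru[nr_lru++] = batch[i];
	pthread_mutex_unlock(&lru_lock);
	batched = 0;
}

static void add_page(long pfn)	/* analogue of folio_add_lru() */
{
	batch[batched++] = pfn;
	if (batched == BATCH)	/* amortized: one lock trip per BATCH pages */
		drain();
}

int main(void)
{
	for (long pfn = 0; pfn < 64; pfn++)
		add_page(pfn);
	drain();		/* flush the partial batch at the end */
	printf("%d entries on LRU\n", nr_lru);
	return 0;
}

(Calling drain() after every add_page(), which is what the
unconditional lru_add_drain() amounts to when each populate covers a
single page, turns one lock acquisition per BATCH pages into one per
page.)
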
> The drain was added by commit ece369c7e104 ("mm/munlock: add
> lru_add_drain() to fix memcg_stat_test") for VM_LOCKED populate, where
> unevictable page stats must be accurate after faulting. Non-locked VMAs
> have no such requirement. Skip the drain for them.
>
> Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
Thanks for the Cc. I'm not convinced that we should be making such a
change, just to avoid the stress that an avowed stress test is showing;
but I'll let others debate that - and, need it be said, I have no
problem with Joseph trying your patch.
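
If it helps the before/after comparison: on a kernel and perf build new
enough for BPF-based lock profiling, something like

	perf lock contention -ab -- stress-ng --mremap 8192 \
		--mremap-bytes 4K --timeout 30 --metrics-brief

should show whether lru_lock wait time actually drops with the patch
applied; treat the exact options as a sketch, they vary by perf version.
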
I tend to stand by my comment in that commit, that it's not just for
VM_LOCKED: I believe it's in everyone's interest that a bulk faulting
interface like populate_vma_page_range() or faultin_vma_page_range()
should drain its local pagevecs at the end, to save others sometimes
needing the much more expensive lru_add_drain_all().
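
To give that asymmetry some shape, extending the userspace analogy
sketched earlier: a local drain touches only the caller's own batch,
while a drain-all must reach every worker's batch - in the kernel by
queuing work on each CPU and waiting for it. A crude fragment, all
names invented, to bolt onto the earlier sketch:

#include <sched.h>
#include <stdatomic.h>

#define MAX_THREADS 64

/* one flag per worker; drain_all() raises them all, then waits */
static _Atomic int drain_requested[MAX_THREADS];
static int nr_threads;

/* each worker polls this in its loop, like the per-CPU drain work */
static void maybe_drain(int tid, void (*drain)(void))
{
	if (atomic_load(&drain_requested[tid])) {
		drain();
		atomic_store(&drain_requested[tid], 0);
	}
}

/* analogue of lru_add_drain_all(): every worker must be reached, and
 * the caller must wait for each of them to respond */
static void drain_all(void)
{
	for (int t = 0; t < nr_threads; t++)
		atomic_store(&drain_requested[t], 1);
	for (int t = 0; t < nr_threads; t++)
		while (atomic_load(&drain_requested[t]))
			sched_yield();
}

If the bulk-faulting paths drain locally on their way out, other tasks
are less likely to find stale remote batches, and so less likely to
have to pay for that all-workers round trip.
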
But lru_add_drain() and lru_add_drain_all(): there's so much to be
said and agonized over there. They've distressed me for years, and
are a hot topic for us at present. But I won't be able to contribute
more on that subject, not this week.
Hugh
> ---
> mm/gup.c | 13 ++++++++++++-
> 1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/mm/gup.c b/mm/gup.c
> index 8e7dc2c6ee73..2dd5de1cb5b9 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1816,6 +1816,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long nr_pages = (end - start) / PAGE_SIZE;
>  	int local_locked = 1;
> +	bool need_drain;
>  	int gup_flags;
>  	long ret;
>  
> @@ -1857,9 +1858,19 @@ long populate_vma_page_range(struct vm_area_struct *vma,
>  	 * We made sure addr is within a VMA, so the following will
>  	 * not result in a stack expansion that recurses back here.
>  	 */
> +	/*
> +	 * Read VM_LOCKED before __get_user_pages(), which may drop
> +	 * mmap_lock when FOLL_UNLOCKABLE is set, after which the vma
> +	 * must not be accessed. The read is stable: mmap_lock is held
> +	 * for read here, so mlock() (which needs the write lock)
> +	 * cannot change VM_LOCKED concurrently.
> +	 */
> +	need_drain = vma->vm_flags & VM_LOCKED;
> +
>  	ret = __get_user_pages(mm, start, nr_pages, gup_flags,
>  			       NULL, locked ? locked : &local_locked);
> -	lru_add_drain();
> +	if (need_drain)
> +		lru_add_drain();
>  	return ret;
>  }
>
>
> base-commit: 3036cd0d3328220a1858b1ab390be8b562774e8a
> --
> 2.53.0
>
>
> thanks,
> --
> John Hubbard