Re: [RFC PATCH 00/16] mm/madvise: batch tlb flushes for MADV_DONTNEED and MADV_FREE
From: Lorenzo Stoakes
Date: Wed Mar 05 2025 - 14:50:26 EST
On Wed, Mar 05, 2025 at 11:46:31AM -0800, Shakeel Butt wrote:
> On Wed, Mar 05, 2025 at 08:19:41PM +0100, David Hildenbrand wrote:
> > On 05.03.25 19:56, Matthew Wilcox wrote:
> > > On Wed, Mar 05, 2025 at 10:15:55AM -0800, SeongJae Park wrote:
> > > > For MADV_DONTNEED[_LOCKED] or MADV_FREE madvise requests, tlb flushes
> > > > can happen for each vma of the given address ranges. Because such tlb
> > > > flushes are for address ranges of the same process, doing those in a
> > > > batch is more efficient while still being safe. Modify the madvise() and
> > > > process_madvise() entry level code paths to do such batched tlb flushes,
> > > > while the internal unmap logic only gathers the tlb entries to flush.
> > >
> > > Do real applications actually do madvise requests that span multiple
> > > VMAs? It just seems weird to me. Like, each vma comes from a separate
> > > call to mmap [1], so why would it make sense for an application to
> > > call madvise() across a VMA boundary?
> >
> > I had the same question. If this happens in an app, I would assume that a
> > single MADV_DONTNEED call would usually not span multiple VMAs, and if it
> > does, not that many (or that often) that we would really care about it.
>
> IMHO madvise() is just an add-on and the real motivation behind this
> series is your next point.
>
> >
> > OTOH, optimizing tlb flushing when using a vectored MADV_DONTNEED version
> > would make more sense to me. I don't recall if process_madvise() allows for
> > that already, and if it does, is this series primarily tackling optimizing
> > that?
>
> Yes, process_madvise() allows that, and that is what SJ has benchmarked
> and reported in the cover letter. In addition, we are adding
> process_madvise() support to jemalloc, which will land soon.
>
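
To make the batching described in the cover letter excerpt above concrete,
here is a minimal kernel-side sketch, not the code from this series: the
entry point owns a single mmu_gather for the whole request, the per-VMA step
only accumulates TLB entries into it, and one flush happens at the end.
madvise_gather_one_vma() is a hypothetical stand-in for the per-VMA zap/free
logic the series converts to "gather only".

/*
 * Sketch only -- not the actual patch series.  One mmu_gather covers the
 * whole madvise request; tlb_finish_mmu() issues a single flush for all
 * VMAs.  madvise_gather_one_vma() is a hypothetical helper name.
 */
static int madvise_dontneed_batched(struct mm_struct *mm,
				    unsigned long start, unsigned long end)
{
	struct mmu_gather tlb;
	struct vm_area_struct *vma;
	VMA_ITERATOR(vmi, mm, start);
	int err = 0;

	tlb_gather_mmu(&tlb, mm);		/* one batch for the whole range */

	for_each_vma_range(vmi, vma, end) {
		/* gather-only: no per-VMA TLB flush here */
		err = madvise_gather_one_vma(&tlb, vma, start, end);
		if (err)
			break;
	}

	tlb_finish_mmu(&tlb);			/* single TLB flush at the end */
	return err;
}
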
Feels like me adjusting that to allow for batched usage for guard regions
has opened up unexpected avenues, which is really cool to see :)

I presume this is intended for PIDFD_SELF usage, right?

At some point we need to look at allowing a larger iovec size. This was
something I was planning to get to eventually, but my workload is really
overwhelming and it's low priority for me, so I'm happy for you guys to
handle it if you want.

We can discuss at LSF/MM if you guys will be there too :)
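
For the PIDFD_SELF + vectored use case mentioned above, a rough userspace
sketch follows.  Assumptions: PIDFD_SELF is exported by your kernel/uapi
headers (it is a recent addition; on older kernels a pidfd from
pidfd_open(getpid(), 0) works instead), and no libc wrapper for
process_madvise() is relied on, so the raw syscall is used.

#define _GNU_SOURCE
#include <linux/pidfd.h>	/* PIDFD_SELF, if the headers provide it */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64 * 1024;	/* a few pages */
	void *a = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	void *b = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (a == MAP_FAILED || b == MAP_FAILED)
		return 1;
	memset(a, 1, len);
	memset(b, 1, len);

	/* Two discontiguous ranges, one syscall: on the kernel side these
	 * are candidates for a single batched TLB flush. */
	struct iovec vec[] = {
		{ .iov_base = a, .iov_len = len },
		{ .iov_base = b, .iov_len = len },
	};

	if (syscall(__NR_process_madvise, PIDFD_SELF, vec, 2,
		    MADV_DONTNEED, 0) < 0)
		perror("process_madvise");

	return 0;
}

The iovec count per call is currently capped at UIO_MAXIOV (1024) entries,
which is the "larger iovec size" limit referred to above.
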