Re: [PATCH v2 10/10] mm/hugetlb: Document why page_vma_mapped_walk() is safe to walk

From: Peter Xu
Date: Thu Dec 08 2022 - 16:07:14 EST


On Thu, Dec 08, 2022 at 02:16:03PM +0100, David Hildenbrand wrote:
> On 07.12.22 21:31, Peter Xu wrote:
> > Taking the vma lock here is not needed for now because all potential
> > hugetlb walkers should hold i_mmap_rwsem. Document that fact.
> >
> > Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> > Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
> > ---
> > mm/page_vma_mapped.c | 10 ++++++++--
> > 1 file changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> > index e97b2e23bd28..2e59a0419d22 100644
> > --- a/mm/page_vma_mapped.c
> > +++ b/mm/page_vma_mapped.c
> > @@ -168,8 +168,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
> > /* The only possible mapping was handled on last iteration */
> > if (pvmw->pte)
> > return not_found(pvmw);
> > -
> > - /* when pud is not present, pte will be NULL */
> > + /*
> > + * NOTE: we don't need an explicit lock here to walk the
> > + * hugetlb pgtable because either (1) all potential callers of
> > + * hugetlb pvmw currently hold i_mmap_rwsem, or (2) the
> > + * caller will not walk a hugetlb vma (e.g. ksm or uprobe).
> > + * If this rule is ever broken, hugetlb_walk() will emit a
> > + * warning, and then we'll figure out what to do.
> > + */
> > pvmw->pte = hugetlb_walk(vma, pvmw->address, size);
> > if (!pvmw->pte)
> > return false;
>
> Would it make sense to squash that into the previous commit?

Sure thing.

--
Peter Xu