Re: [PATCH 1/2] mm/mprotect: use mmu_gather

From: Peter Xu
Date: Tue Oct 12 2021 - 19:21:02 EST


On Tue, Oct 12, 2021 at 10:31:45AM -0700, Nadav Amit wrote:
>
>
> > On Oct 12, 2021, at 3:16 AM, Peter Xu <peterx@xxxxxxxxxx> wrote:
> >
> > On Sat, Sep 25, 2021 at 01:54:22PM -0700, Nadav Amit wrote:
> >> @@ -338,25 +344,25 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
> >> struct mm_struct *mm = vma->vm_mm;
> >> pgd_t *pgd;
> >> unsigned long next;
> >> - unsigned long start = addr;
> >> unsigned long pages = 0;
> >> + struct mmu_gather tlb;
> >>
> >> BUG_ON(addr >= end);
> >> pgd = pgd_offset(mm, addr);
> >> flush_cache_range(vma, addr, end);
> >> inc_tlb_flush_pending(mm);
> >> + tlb_gather_mmu(&tlb, mm);
> >> + tlb_start_vma(&tlb, vma);
> >
> > Pure question:
> >
> > I actually have no idea why tlb_start_vma() is needed here, as the protection
> > range can be just a single page, but anyway.. I do see that tlb_start_vma()
> > contains a whole-vma flush_cache_range() when the arch needs it.  Does that
> > mean that, besides dropping inc_tlb_flush_pending(), the other call to
> > flush_cache_range() above should be dropped as well?
>
> Good point.
>
> tlb_start_vma() and tlb_end_vma() are required since some archs do not
> batch TLB flushes across VMAs (e.g., ARM).

Sorry, I didn't follow here - change_protection() operates on a single vma
anyway, so I don't see why it needs to consider crossing vmas.

In any case, it would be great if you could add some explanation to the
commit message on why we need tlb_{start|end}_vma(), as it may not be
obvious to everyone.

> I am not sure whether that’s the best behavior for all archs, but I do not
> want to change it.
>
> Anyhow, you make a valid point that the flush_cache_range() should be
> dropped as well. I will do so for next version.
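
FWIW, with those two calls gone I'd expect the function to end up looking
roughly like below.  This is only a sketch of my understanding, not the
actual v2 - I'm assuming the rest of the series threads the gather through
change_p4d_range() and friends, and that tlb_finish_mmu() replaces the
final flush_tlb_range()/dec_tlb_flush_pending() pair:

static unsigned long change_protection_range(struct vm_area_struct *vma,
		unsigned long addr, unsigned long end, pgprot_t newprot,
		unsigned long cp_flags)
{
	struct mm_struct *mm = vma->vm_mm;
	struct mmu_gather tlb;
	pgd_t *pgd;
	unsigned long next;
	unsigned long pages = 0;

	BUG_ON(addr >= end);
	pgd = pgd_offset(mm, addr);

	/* tlb_gather_mmu() bumps mm->tlb_flush_pending for us ... */
	tlb_gather_mmu(&tlb, mm);
	/* ... and tlb_start_vma() does the cache flush where needed. */
	tlb_start_vma(&tlb, vma);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		pages += change_p4d_range(&tlb, vma, pgd, addr, next,
					  newprot, cp_flags);
	} while (pgd++, addr = next, addr != end);
	tlb_end_vma(&tlb, vma);
	tlb_finish_mmu(&tlb);

	return pages;
}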

Thanks,

--
Peter Xu