Re: [PATCH 3/4] mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE
From: Nicholas Piggin
Date: Mon Aug 27 2018 - 01:00:21 EST
On Fri, 24 Aug 2018 13:39:53 +0200
Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Fri, Aug 24, 2018 at 01:32:14PM +0200, Peter Zijlstra wrote:
> > On Fri, Aug 24, 2018 at 10:47:17AM +0200, Peter Zijlstra wrote:
> > > On Thu, Aug 23, 2018 at 02:39:59PM +0100, Will Deacon wrote:
> > > > The only problem with this approach is that we've lost track of the granule
> > > > size by the point we get to the tlb_flush(), so we can't adjust the stride of
> > > > the TLB invalidations for huge mappings, which actually works nicely in the
> > > > synchronous case (e.g. we perform a single invalidation for a 2MB mapping,
> > > > rather than iterating over it at a 4k granule).
> > > >
> > > > One thing we could do is switch to synchronous mode if we detect a change in
> > > > granule (i.e. treat it like a batch failure).
> > >
> > > We could use tlb_start_vma() to track that, I think. Shouldn't be too
> > > hard.
> > Hurm.. look at commit:
> > e77b0852b551 ("mm/mmu_gather: track page size with mmu gather and force flush if page size change")
> Ah, good, it seems that already got cleaned up a lot. But it all moved
> into the power code.. blergh.
I've lost track of what the problem is here.
For powerpc, tlb_start_vma is not the right API to use for this, because
powerpc wants to deal with different page sizes within a single vma.
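[Editor's note] The idea discussed above — Will's suggestion of treating a granule change like a batch failure, which commit e77b0852b551 implemented by tracking the page size in the mmu_gather — can be sketched roughly as below. This is a hedged illustration, not kernel code: the names (gather_init, gather_page, gather_flush) and the flush counter are made up for the demo; the real code lives in include/asm-generic/tlb.h and arch hooks.

```c
/*
 * Illustrative sketch only (not the kernel's mmu_gather): remember the
 * page size of the pages batched so far, and force a flush when a page
 * of a different size shows up, so the eventual flush can invalidate
 * the whole range with a single stride (e.g. one op for a 2MB mapping
 * instead of iterating at 4K granule).
 */
#include <assert.h>
#include <stddef.h>

struct mmu_gather {
    unsigned long page_size;  /* granule of the current batch; 0 = empty */
    unsigned long start, end; /* VA range covered by the batch */
    int flushes;              /* number of flushes issued (demo only) */
};

static void gather_init(struct mmu_gather *tlb)
{
    tlb->page_size = 0;
    tlb->start = ~0UL;
    tlb->end = 0;
    tlb->flushes = 0;
}

/* Issue one ranged invalidation at a single stride, then reset the batch. */
static void gather_flush(struct mmu_gather *tlb)
{
    if (!tlb->page_size)
        return;
    /* real code would invalidate [start, end) with stride page_size */
    tlb->flushes++;
    tlb->page_size = 0;
    tlb->start = ~0UL;
    tlb->end = 0;
}

/* Batch one page; a granule change forces a flush of the old batch first. */
static void gather_page(struct mmu_gather *tlb,
                        unsigned long addr, unsigned long size)
{
    if (tlb->page_size && tlb->page_size != size)
        gather_flush(tlb);
    tlb->page_size = size;
    if (addr < tlb->start)
        tlb->start = addr;
    if (addr + size > tlb->end)
        tlb->end = addr + size;
}
```

For example, batching two 4K pages and then a 2M page forces one intermediate flush of the 4K batch, and the final flush covers the 2M page at its own stride.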