Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush
From: Peter Zijlstra
Date: Mon May 13 2019 - 07:29:01 EST
On Mon, May 13, 2019 at 09:21:01AM +0000, Nadav Amit wrote:
> > On May 13, 2019, at 2:12 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >> The other thing I was thinking of is trying to detect overlap through
> >> the page-tables themselves, but we have a distinct lack of storage
> >> there.
> >
> > We might just use some state in the pmd, there's still 2 _pt_pad_[12] in
> > struct page to 'use'. So we could come up with some tlb generation
> > scheme that would detect conflict.
>
> It is rather easy to come up with a scheme (and I did similar things) if you
> flush the table while you hold the page-tables lock. But if you batch across
> page-tables it becomes harder.
Yeah; finding that out now. I keep finding holes :/
> Thinking about it while typing, perhaps it is simpler than I think - if you
> need to flush range that runs across more than a single table, you are very
> likely to flush a range of more than 33 entries, so anyhow you are likely to
> do a full TLB flush.
We can't rely on the 33, that x86 specific. Other architectures can have
another (or no) limit on that.
> So perhaps just avoiding the batching if only entries from a single table
> are flushed would be enough.
That's near to what Will suggested initially, just flush the entire
thing when there's a conflict.