Re: [PATCH] mm: mmu_gather: remove __tlb_reset_range() for force flush

From: Nadav Amit
Date: Mon May 13 2019 - 05:22:37 EST


> On May 13, 2019, at 2:12 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Mon, May 13, 2019 at 10:36:06AM +0200, Peter Zijlstra wrote:
>> On Thu, May 09, 2019 at 09:21:35PM +0000, Nadav Amit wrote:
>>> It may be possible to avoid false-positive nesting indications (when the
>>> flushes do not overlap) by creating a new struct mmu_gather_pending, with
>>> something like:
>>>
>>> struct mmu_gather_pending {
>>>	u64 start;
>>>	u64 end;
>>>	struct mmu_gather_pending *next;
>>> };
>>>
>>> tlb_finish_mmu() would then iterate over the mm->mmu_gather_pending
>>> (pointing to the linked list) and find whether there is any overlap. This
>>> would still require synchronization (acquiring a lock when allocating and
>>> deallocating or something fancier).
>>
>> We have an interval_tree for this, and yes, that's how far I got :/
>>
>> The other thing I was thinking of is trying to detect overlap through
>> the page-tables themselves, but we have a distinct lack of storage
>> there.
>
> We might just use some state in the pmd, there's still 2 _pt_pad_[12] in
> struct page to 'use'. So we could come up with some tlb generation
> scheme that would detect conflict.

It is rather easy to come up with such a scheme (and I have done similar
things) if you do the flush while you hold the page-table lock. But if you
batch across page tables, it becomes harder.
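
For illustration, this is the kind of scheme I mean for the non-batched
case - a rough, untested sketch with made-up names (pt_meta, mm_meta,
flush_gen), not actual kernel structures:

	/*
	 * Per-page-table flush generation, updated only under the
	 * page-table lock and compared against a per-mm generation.
	 */
	struct pt_meta {
		u64 flush_gen;	/* generation this table was last flushed at */
	};

	struct mm_meta {
		u64 flush_gen;	/* bumped on every TLB flush of this mm */
	};

	/* caller holds the page-table lock protecting @pt */
	static bool pt_needs_flush(struct mm_meta *mm, struct pt_meta *pt)
	{
		return pt->flush_gen != READ_ONCE(mm->flush_gen);
	}

	static void pt_mark_flushed(struct mm_meta *mm, struct pt_meta *pt)
	{
		pt->flush_gen = READ_ONCE(mm->flush_gen);
	}

As long as everything happens under the page-table lock, the generation in
the table page cannot change underneath you; once you batch across several
tables, there is no single lock that stabilizes all of them.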

Thinking about it while typing, perhaps it is simpler than I thought: if you
need to flush a range that spans more than a single page table, you are very
likely to be flushing more than 33 entries, so you would most likely end up
doing a full TLB flush anyway.
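
That is essentially the heuristic x86 already applies: the flush path
compares the number of pages against tlb_single_page_flush_ceiling (33 by
default) and does a full flush above it. Roughly (paraphrasing from memory,
not verbatim from arch/x86/mm/tlb.c; the flush helpers are placeholders):

	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;

	if (nr_pages > tlb_single_page_flush_ceiling) {
		/* too many entries: invalidate the whole TLB */
		flush_whole_tlb();			/* placeholder */
	} else {
		unsigned long addr;

		/* few enough entries: flush each page individually */
		for (addr = start; addr < end; addr += PAGE_SIZE)
			flush_one_page(addr);		/* placeholder */
	}

A range that crosses a page-table boundary does not have to be that large,
of course, but in practice an unmap that spans more than one table is
usually much bigger than 33 pages.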

So perhaps it would be enough to just avoid the batching when only entries
from a single page table are flushed.
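
Detecting that case should be cheap - something like the following (a
rough, untested sketch against the mmu_gather start/end fields):

	/*
	 * Rough sketch: did the gathered range stay within a single
	 * (PMD-level) page table?
	 */
	static inline bool tlb_single_pt_range(struct mmu_gather *tlb)
	{
		return (tlb->start & PMD_MASK) == ((tlb->end - 1) & PMD_MASK);
	}

The single-table case could then skip the batching (flush while still
holding the page-table lock), while ranges spanning multiple tables keep
batching and most likely end up as a full flush anyway.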