Re: [RFC PATCH 2/2] mm/mmu_gather: Avoid multiple page walk cache flush

From: Peter Zijlstra
Date: Tue Dec 17 2019 - 03:59:06 EST


On Tue, Dec 17, 2019 at 12:47:13PM +0530, Aneesh Kumar K.V wrote:
> On tlb_finish_mmu() the kernel does a TLB flush before the mmu_gather table
> invalidate. Depending on kernel config, the mmu_gather table invalidate also
> does another TLBI. Avoid the latter on tlb_finish_mmu().

That is already avoided. If you look at tlb_flush_mmu_tlbonly(), it does
__tlb_reset_range(), which results in ->end = 0, which then triggers the
early exit on the next invocation:

	if (!tlb->end)
		return;
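
To make that concrete, here is a minimal standalone sketch of the logic
(not the kernel code itself; the names mirror mm/mmu_gather.c, but the
struct and the flush are reduced to just the range tracking), showing why
the second flush attempt already bails out:

	/*
	 * Toy model only: real mmu_gather tracks more state (freed_tables,
	 * cleared_* bits, fullmm, ...) and tlb_flush() is arch-specific.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	struct mmu_gather {
		unsigned long start;
		unsigned long end;
	};

	/* mirrors __tlb_reset_range(): clear the tracked range, so ->end = 0 */
	static void __tlb_reset_range(struct mmu_gather *tlb)
	{
		tlb->start = ~0UL;	/* "no start yet" */
		tlb->end   = 0;		/* empty range */
	}

	/* mirrors tlb_flush_mmu_tlbonly(): flush, then reset the range */
	static bool tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
	{
		if (!tlb->end)
			return false;	/* early exit: nothing accumulated */

		printf("TLBI for range [%#lx, %#lx)\n", tlb->start, tlb->end);
		__tlb_reset_range(tlb);
		return true;
	}

	int main(void)
	{
		struct mmu_gather tlb = { .start = 0x1000, .end = 0x5000 };

		tlb_flush_mmu_tlbonly(&tlb);	/* issues the TLBI, resets range */
		tlb_flush_mmu_tlbonly(&tlb);	/* ->end == 0, returns at once  */

		return 0;
	}

The first call issues the TLBI and resets the range; the second call
returns immediately because ->end is already 0.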