[RFC PATCH 0/2] Fix a couple of issues with zap_pte_range and MMU gather

From: Will Deacon
Date: Tue Oct 28 2014 - 07:45:19 EST


Hi all,

This patch series attempts to fix a couple of issues I've noticed with
zap_pte_range and the MMU gather code on arm64.

The first fix resolves a TLB range truncation, which I found by code
inspection (this is on the batch failure path, which doesn't appear to
be regularly exercised on my system).

For the second fix, I'd really appreciate some comments. The problem is
that the architecture TLB batching implementation may update the start
and end fields of the gather structure, so that they actually cover only
a subset of the initial range set up by tlb_gather_mmu (based on calls
to tlb_remove_tlb_entry). In the force_flush case, zap_pte_range sets
these fields directly, which can result in a negative range if the
architecture has also updated the end address. The patch here uses
min(end, addr) as the end of the first range, which creates a second
range from that address to the end of the region. This results in a
potential over-invalidation on arm64, but I can't think of anything
better without updating (at least) the x86 tlb.h implementation.

Ideally, we'd let the architecture set start/end during the call to
tlb_flush_mmu_tlbonly (arm64 does this already in tlb_flush).

Thoughts?

Will


Will Deacon (2):
zap_pte_range: update addr when forcing flush after TLB batching
failure
zap_pte_range: fix partial TLB flushing in response to a dirty pte

mm/memory.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)

--
2.1.1

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/