On Tue, Dec 17, 2019 at 12:47:12PM +0530, Aneesh Kumar K.V wrote:
> Architectures that have hardware walkers of the Linux page tables should
> flush the TLB on mmu_gather batch allocation failures and batch flush.
> Some architectures, like POWER, support multiple translation modes (hash
> and radix), and in the case of POWER only the radix translation mode
> needs the above TLBI. This is because in hash translation mode the
> kernel wants to avoid this extra flush, since there are no hardware
> walkers of the Linux page tables. With radix translation the hardware
> also walks the Linux page tables, so the kernel needs to TLB-invalidate
> the page-walk cache before page-table pages are freed.
>
> Based on changes from Peter Zijlstra <peterz@xxxxxxxxxxxxx>
AFAICT it is all my patch ;-)
Anyway, this commit:
> More details in
> commit: d86564a2f085 ("mm/tlb, x86/mm: Support invalidating TLB caches for RCU_TABLE_FREE")
states that you do an explicit invalidate in __p*_free_tlb(), which, if
I'm not mistaken, is still there:
arch/powerpc/include/asm/nohash/pgalloc.h: tlb_flush_pgtable(tlb, address);
Or am I reading this wrong? I'm thinking you can remove that now.
> diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
> index b2c0be93929d..feea1a09bbce 100644
> --- a/arch/powerpc/include/asm/tlb.h
> +++ b/arch/powerpc/include/asm/tlb.h
> @@ -27,6 +27,10 @@
>  #define tlb_flush tlb_flush
>  extern void tlb_flush(struct mmu_gather *tlb);
> +#ifdef CONFIG_HAVE_RCU_TABLE_FREE
> +/*
> + * PPC-Hash does not use the linux page-tables, so we can avoid
> + * the TLBI for page-table freeing, PPC-Radix otoh does use the
> + * page-tables and needs the TLBI.
> + */
> +#define tlb_needs_table_invalidate() radix_enabled()
> +#endif
Also, are you really sure about the !SMP case? Esp. on Radix I'm
thinking that the PWC (page-walk-cache) can give trouble even on UP,
when we get preempted in the middle of mmu_gather. Hmm?
>  /* Get the generic bits... */
>  #include <asm-generic/tlb.h>