[RFC PATCH 6/6] x86/mm/tlb: Optimize local TLB flushes

From: Nadav Amit
Date: Sat May 25 2019 - 04:25:26 EST


While the updated smp infrastructure is capable of running a function on
a single local core, it is not optimized for this case. The multiple
function calls and the indirect branch introduce some overhead, making
local TLB flushes slower than they were before the recent changes.

Before calling the SMP infrastructure, check whether only a local TLB
flush is needed, to restore the lost performance in this common case.
This requires checking mm_cpumask() an additional time, but unless this
mask is updated very frequently, this should not impact performance
negatively.

Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: x86@xxxxxxxxxx
Signed-off-by: Nadav Amit <namit@xxxxxxxxxx>
---
arch/x86/mm/tlb.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0ec2bfca7581..3f3f983e224e 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -823,8 +823,12 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
const struct flush_tlb_info *info)
{
int this_cpu = smp_processor_id();
+ bool flush_others = false;

- if (static_branch_likely(&flush_tlb_multi_enabled)) {
+ if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+ flush_others = true;
+
+ if (static_branch_likely(&flush_tlb_multi_enabled) && flush_others) {
flush_tlb_multi(cpumask, info);
return;
}
@@ -836,7 +840,7 @@ static void flush_tlb_on_cpus(const cpumask_t *cpumask,
local_irq_enable();
}

- if (cpumask_any_but(cpumask, this_cpu) < nr_cpu_ids)
+ if (flush_others)
flush_tlb_others(cpumask, info);
}

--
2.20.1