[PATCH] powerpc/non-smp: Unconditionally call smp_mb() on switch_mm

From: Christophe Leroy
Date: Mon Jul 05 2021 - 08:01:22 EST


Commit 3ccfebedd8cf ("powerpc, membarrier: Skip memory barrier in
switch_mm()") added some logic to skip the smp_mb() in
switch_mm_irqs_off() before the call to switch_mmu_context().

However, on non-SMP builds smp_mb() is just a compiler barrier, and
issuing it unconditionally is simpler than the logic used to check
whether the barrier is needed or not.
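
For reference, a minimal sketch of why the unconditional call costs
nothing on a non-SMP build (assuming the asm-generic fallback for
smp_mb(); architectures may provide their own definition):

/* include/asm-generic/barrier.h, simplified: with CONFIG_SMP=n,
 * smp_mb() is only a compiler barrier and emits no instruction.
 */
#ifndef CONFIG_SMP
#define smp_mb()	barrier()
#endif

/* IS_ENABLED(CONFIG_SMP) is a compile-time constant, so with
 * CONFIG_SMP=n the membarrier_state test is folded away and
 * membarrier_arch_switch_mm() effectively reduces to:
 */
static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
					     struct mm_struct *next,
					     struct task_struct *tsk)
{
	smp_mb();	/* just barrier() here */
}

This is consistent with the shorter object code in the "After the patch"
listing below.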

After the patch:

00000000 <switch_mm_irqs_off>:
...
c: 7c 04 18 40 cmplw r4,r3
10: 81 24 00 24 lwz r9,36(r4)
14: 91 25 04 c8 stw r9,1224(r5)
18: 4d 82 00 20 beqlr
1c: 48 00 00 00 b 1c <switch_mm_irqs_off+0x1c>
1c: R_PPC_REL24 switch_mmu_context

Before the patch:

00000000 <switch_mm_irqs_off>:
...
c: 7c 04 18 40 cmplw r4,r3
10: 81 24 00 24 lwz r9,36(r4)
14: 91 25 04 c8 stw r9,1224(r5)
18: 4d 82 00 20 beqlr
1c: 81 24 00 28 lwz r9,40(r4)
20: 71 29 00 0a andi. r9,r9,10
24: 40 82 00 34 bne 58 <switch_mm_irqs_off+0x58>
28: 48 00 00 00 b 28 <switch_mm_irqs_off+0x28>
28: R_PPC_REL24 switch_mmu_context
...
58: 2c 03 00 00 cmpwi r3,0
5c: 41 82 ff cc beq 28 <switch_mm_irqs_off+0x28>
60: 48 00 00 00 b 60 <switch_mm_irqs_off+0x60>
60: R_PPC_REL24 switch_mmu_context

Signed-off-by: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
---
arch/powerpc/include/asm/membarrier.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/membarrier.h b/arch/powerpc/include/asm/membarrier.h
index 6e20bb5c74ea..de7f79157918 100644
--- a/arch/powerpc/include/asm/membarrier.h
+++ b/arch/powerpc/include/asm/membarrier.h
@@ -12,7 +12,8 @@ static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
 	 * when switching from userspace to kernel is not needed after
 	 * store to rq->curr.
 	 */
-	if (likely(!(atomic_read(&next->membarrier_state) &
+	if (IS_ENABLED(CONFIG_SMP) &&
+	    likely(!(atomic_read(&next->membarrier_state) &
 		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
 		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
 		return;
--
2.25.0