[PATCH 2/3] x86/mce: Remove explicit smp_wmb() when starting CPUs sync

From: Borislav Petkov
Date: Wed Apr 06 2016 - 04:06:29 EST


From: Davidlohr Bueso <dave@xxxxxxxxxxxx>

mce_start() has an explicit smp_wmb() to serialize writes to global_nwo
and mce_callin. However, atomic_inc_return() implies barriers on both
sides of the call, so simply rely on this full SMP barrier instead.

Signed-off-by: Davidlohr Bueso <dbueso@xxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: linux-edac <linux-edac@xxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Cc: x86-ml <x86@xxxxxxxxxx>
Link: http://lkml.kernel.org/r/1458602396-840-1-git-send-email-dave@xxxxxxxxxxxx
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
---
arch/x86/kernel/cpu/mcheck/mce.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index f0c921b03e42..6b7039c166b8 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -830,9 +830,9 @@ static int mce_start(int *no_way_out)

atomic_add(*no_way_out, &global_nwo);
/*
- * global_nwo should be updated before mce_callin
+ * Rely on the implied barrier below, such that global_nwo
+ * is updated before mce_callin.
*/
- smp_wmb();
order = atomic_inc_return(&mce_callin);

/*
--
2.7.3