[PATCH v2 22/28] ARCv2: STAR 9000837815 workaround hardware exclusive transactions livelock

From: Vineet Gupta
Date: Fri Jun 19 2015 - 05:56:00 EST


A quad core SMP build could get into a hardware livelock with concurrent
LLOCK/SCOND. Work around that by adding a PREFETCHW, which is serialized by
the SCU (System Coherency Unit). It brings the cache line into Exclusive
state and makes the other cores invalidate their copies. This gives the
winner enough time to complete the LLOCK/SCOND before the others can grab
the line back.

Cc: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Vineet Gupta <vgupta@xxxxxxxxxxxx>
---
arch/arc/include/asm/atomic.h | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)
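
For reference, a rough sketch of what the ATOMIC_OP(add, +=, add)
expansion looks like on ARCv2 once this patch is applied. Illustrative
only, not part of the diff; the operand constraints are assumed to
mirror the existing macro in arch/arc/include/asm/atomic.h:

/* Hypothetical expansion of ATOMIC_OP(add, +=, add) with CONFIG_ISA_ARCV2 */
static inline void atomic_add(int i, atomic_t *v)
{
	unsigned int temp;

	__asm__ __volatile__(
	"1:				\n"
	"	prefetchw [%1]		\n"	/* pull line Exclusive; serialized by SCU */
	"	llock   %0, [%1]	\n"	/* load-locked the counter */
	"	add     %0, %0, %2	\n"	/* apply the operation */
	"	scond   %0, [%1]	\n"	/* store-conditional; fails if line was lost */
	"	bnz     1b		\n"	/* retry until SCOND succeeds */
	: "=&r"(temp)				/* early clobber, prevents reg reuse */
	: "r"(&v->counter), "ir"(i)
	: "cc");
}
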

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 20b7dc17979e..03484cb4d16d 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -23,13 +23,21 @@

#define atomic_set(v, i) (((v)->counter) = (i))

+#ifdef CONFIG_ISA_ARCV2
+#define PREFETCHW " prefetchw [%1] \n"
+#else
+#define PREFETCHW
+#endif
+
#define ATOMIC_OP(op, c_op, asm_op) \
static inline void atomic_##op(int i, atomic_t *v) \
{ \
unsigned int temp; \
\
__asm__ __volatile__( \
- "1: llock %0, [%1] \n" \
+ "1: \n" \
+ PREFETCHW \
+ " llock %0, [%1] \n" \
" " #asm_op " %0, %0, %2 \n" \
" scond %0, [%1] \n" \
" bnz 1b \n" \
@@ -50,7 +58,9 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \
smp_mb(); \
\
__asm__ __volatile__( \
- "1: llock %0, [%1] \n" \
+ "1: \n" \
+ PREFETCHW \
+ " llock %0, [%1] \n" \
" " #asm_op " %0, %0, %2 \n" \
" scond %0, [%1] \n" \
" bnz 1b \n" \
--
1.9.1
