[RFC PATCH 4/4] lib: lockref: use relaxed cmpxchg64 variant for lockless updates

From: Will Deacon
Date: Thu Sep 26 2013 - 11:13:56 EST


The 64-bit cmpxchg operation on the lockref is ordered by virtue of
hazarding between the cmpxchg operation and the reference count
manipulation. On weakly ordered memory architectures (such as ARM), it
can be of great benefit to omit the barrier instructions where they are
not needed.

This patch moves the lockless lockref code over to the new
cmpxchg64_relaxed operation, which doesn't provide barrier semantics.

Cc: Waiman Long <Waiman.Long@xxxxxx>
Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
---

So here's a quick stab at allowing the memory barrier semantics to be
avoided on weakly ordered architectures. This helps ARM, but it would be
interesting to see if ia64 gets a boost too (although I've not relaxed
their cmpxchg because there is uapi stuff involved that I wasn't
comfortable refactoring).

lib/lockref.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/lib/lockref.c b/lib/lockref.c
index 677d036..6d896ab 100644
--- a/lib/lockref.c
+++ b/lib/lockref.c
@@ -14,8 +14,9 @@
 	while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) {	\
 		struct lockref new = old, prev = old;				\
 		CODE								\
-		old.lock_count = cmpxchg64(&lockref->lock_count,		\
-					   old.lock_count, new.lock_count);	\
+		old.lock_count = cmpxchg64_relaxed(&lockref->lock_count,	\
+						   old.lock_count,		\
+						   new.lock_count);		\
 		if (likely(old.lock_count == prev.lock_count)) {		\
 			SUCCESS;						\
 		}								\
--
1.8.2.2
