On Tue, Sep 22, 2015 at 04:50:40PM -0400, Waiman Long wrote:
This patch replaces the cmpxchg() and xchg() calls in the native
qspinlock code with more relaxed versions of those calls to enable
other architectures to adopt queued spinlocks with less performance
overhead.
@@ -62,7 +63,7 @@ static __always_inline int queued_spin_is_contended(struct qspinlock *lock)
static __always_inline int queued_spin_trylock(struct qspinlock *lock)
{
if (!atomic_read(&lock->val) &&
- (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0))
+ (atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL) == 0))
return 1;
return 0;
}
@@ -77,7 +78,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
{
u32 val;
- val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
+ val = atomic_cmpxchg_acquire(&lock->val, 0, _Q_LOCKED_VAL);
if (likely(val == 0))
return;
queued_spin_lock_slowpath(lock, val);
@@ -319,7 +329,7 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
if (val == new)
new |= _Q_PENDING_VAL;
- old = atomic_cmpxchg(&lock->val, val, new);
+ old = atomic_cmpxchg_acquire(&lock->val, val, new);
if (old == val)
break;
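
For reference, what the acquire-ordered substitution amounts to, in
plain C11 terms, is roughly the following (a sketch only; the toy_*
names are made up here and this is not the kernel's actual generic
fallback):

#include <stdatomic.h>
#include <stdbool.h>

/* Toy lock: 0 == unlocked, 1 == locked. */
struct toy_spinlock {
	atomic_uint val;
};

static inline bool toy_spin_trylock(struct toy_spinlock *lock)
{
	unsigned int expected = 0;

	/*
	 * Success only needs ACQUIRE ordering, so the critical section
	 * cannot leak above the lock; a fully ordered (seq_cst) CAS is
	 * stronger than required here.  Failure needs no ordering.
	 */
	return atomic_compare_exchange_strong_explicit(&lock->val,
			&expected, 1,
			memory_order_acquire,
			memory_order_relaxed);
}

static inline void toy_spin_unlock(struct toy_spinlock *lock)
{
	/* RELEASE pairs with the ACQUIRE of the next lock owner. */
	atomic_store_explicit(&lock->val, 0, memory_order_release);
}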
So given recent discussion, all this _release/_acquire stuff is starting
to worry me.
So we've not declared whether they should be RCsc or RCpc, and given
this patch (and the previous ones) these lock primitives turn into RCpc
if the atomic primitives are RCpc.
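
The place where that difference shows up is the classic unlock+lock
sequence; a litmus-style sketch (not kernel code; x and y start at 0
and s is an ordinary spinlock):

static int x, y;
static int r1, r2;
static DEFINE_SPINLOCK(s);

static void cpu0(void)
{
	spin_lock(&s);
	WRITE_ONCE(x, 1);
	spin_unlock(&s);	/* release */
	spin_lock(&s);		/* acquire */
	r1 = READ_ONCE(y);
	spin_unlock(&s);
}

static void cpu1(void)
{
	WRITE_ONCE(y, 1);
	smp_mb();
	r2 = READ_ONCE(x);
}

/*
 * Can this end with r1 == 0 && r2 == 0?
 *
 * If unlock+lock on one CPU is RCsc (acts like a full barrier), the
 * outcome is forbidden.  If it is only RCpc -- acquire-only cmpxchg
 * for the lock plus a release-only store for the unlock -- nothing
 * orders CPU0's store to x before its load of y, and the outcome is
 * allowed.
 */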
So far only the proposed PPC implementation is RCpc -- and their current
spinlock implementation is also RCpc, but that is a point of discussion.
Just saying..
Also, I think we should annotate the control dependencies in these
things.
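
Something like the following is what I mean -- paraphrased from the
slowpath rather than quoted exactly, and the comment wording is only a
suggestion:

/* Wait for the lock owner (and any pending waiter) to go away. */
while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK)
	cpu_relax();

/*
 * CTRL DEPENDENCY: the loop exit above is a load (atomic_read() is
 * effectively a READ_ONCE()) followed by a conditional branch, so
 * stores that follow cannot be hoisted above the load of lock->val.
 * Note this orders the load against later *stores* only; later loads
 * are not ordered and still need an acquire or smp_rmb() if that
 * matters.
 */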