[RFC][PATCH 31/31] locking,qrwlock: Employ atomic_fetch_add_acquire()
From: Peter Zijlstra
Date: Fri Apr 22 2016 - 06:02:26 EST
- Next message: Peter Zijlstra: "[RFC][PATCH 04/31] locking,arm: Implement atomic{,64}_fetch_{add,sub,and,andnot,or,xor}{,_relaxed,_acquire,_release}()"
- Previous message: Dan Carpenter: "[patch 2/2] crypto: mxc-scc - fix unwinding in mxc_scc_crypto_register()"
- In reply to: Peter Zijlstra: "[RFC][PATCH 15/31] locking,mips: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()"
- Next in thread: Waiman Long: "Re: [RFC][PATCH 31/31] locking,qrwlock: Employ atomic_fetch_add_acquire()"
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
The only reason the current code uses atomic_add_return_acquire() and
then subtracts _QR_BIAS is to make GCC emit just the "LOCK XADD"
instruction on x86 (and not a pointless extra ADD on the result).
Express that intent directly with atomic_fetch_add_acquire(), which
returns the pre-add value to begin with.
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
kernel/locking/qrwlock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -93,7 +93,7 @@ void queued_read_lock_slowpath(struct qr
* that accesses can't leak upwards out of our subsequent critical
* section in the case that the lock is currently held for write.
*/
- cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS;
+ cnts = atomic_fetch_add_acquire(_QR_BIAS, &lock->cnts);
rspin_until_writer_unlock(lock, cnts);
/*