[PATCH -v2 32/33] locking,qrwlock: Employ atomic_fetch_add_acquire()
From: Peter Zijlstra
Date: Tue May 31 2016 - 06:32:22 EST
- Next message: Peter Zijlstra: "[PATCH -v2 21/33] locking,sh: Implement atomic_fetch_{add,sub,and,or,xor}()"
- Previous message: Peter Zijlstra: "[PATCH -v2 24/33] locking,x86: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()"
- In reply to: Peter Zijlstra: "[PATCH -v2 24/33] locking,x86: Implement atomic{,64}_fetch_{add,sub,and,or,xor}()"
- Next in thread: Peter Zijlstra: "[PATCH -v2 21/33] locking,sh: Implement atomic_fetch_{add,sub,and,or,xor}()"
The only reason for the current code is to make GCC emit only the
"LOCK XADD" instruction on x86 (and not a pointless extra ADD on the
result). Using atomic_fetch_add_acquire() achieves this more cleanly.
Acked-by: Waiman Long <waiman.long@xxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
---
kernel/locking/qrwlock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -93,7 +93,7 @@ void queued_read_lock_slowpath(struct qr
* that accesses can't leak upwards out of our subsequent critical
* section in the case that the lock is currently held for write.
*/
- cnts = atomic_add_return_acquire(_QR_BIAS, &lock->cnts) - _QR_BIAS;
+ cnts = atomic_fetch_add_acquire(_QR_BIAS, &lock->cnts);
rspin_until_writer_unlock(lock, cnts);
/*