On Tue, Jul 07, 2015 at 06:27:18PM +0100, Will Deacon wrote:
> On Tue, Jul 07, 2015 at 03:30:22PM +0100, Waiman Long wrote:
> > On 07/07/2015 07:49 AM, Will Deacon wrote:
> > > On Tue, Jul 07, 2015 at 12:17:31PM +0100, Peter Zijlstra wrote:
> > > > On Tue, Jul 07, 2015 at 10:17:11AM +0100, Will Deacon wrote:
> > > > > Thinking about it, can we kill _QW_WAITING altogether and set (cmpxchg
> > > > > from 0) wmode to _QW_LOCKED in the write_lock slowpath, polling (acquire)
> > > > > rmode until it hits zero?
> > > >
> > > > No, this is how we make the lock fair so that an incoming stream of
> > > > later readers won't block a writer from getting the lock.
> > >
> > > But won't those readers effectively see that the lock is held for write
> > > (because we set wmode to _QW_LOCKED before the existing reader had drained)
> > > and therefore fall down the slow-path and get held up on the spinlock?
> >
> > Yes, that's the entire point. Once there's a writer pending, new readers
> > should queue too.
>
> Agreed. My point was that we can achieve the same result without
> a separate _QW_WAITING flag afaict.
>
> > _QW_WAITING and _QW_LOCKED have different semantics and are necessary for
> > the proper handshake between readers and writer. We set _QW_WAITING when
> > readers own the lock and the writer is waiting for the readers to go
> > away. The _QW_WAITING flag will force new readers to go to queuing while
> > the writer is waiting. We set _QW_LOCKED when a writer owns the lock and
> > it can only be set atomically when no reader is present. Without the
> > intermediate _QW_WAITING step, a continuous stream of incoming readers
> > (which keep the reader count from ever reaching 0) could deny a writer
> > from getting the lock indefinitely.
>
> It's probably best if I try to implement something and we can either pick
> holes in the patch or I'll realise why I'm wrong in the process :)

Hmm, wasn't very enlightening. What's wrong with the following?
Will
--->8
diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
index deb9e8b0eb9e..be8dc5c6fdbd 100644
--- a/include/asm-generic/qrwlock.h
+++ b/include/asm-generic/qrwlock.h
@@ -27,7 +27,6 @@
 /*
  * Writer states & reader shift and bias
  */
-#define _QW_WAITING     1       /* A writer is waiting */
 #define _QW_LOCKED      0xff    /* A writer holds the lock */
 #define _QW_WMASK       0xff    /* Writer mask */
 #define _QR_SHIFT       8       /* Reader count shift */
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index 9f644933f6d4..4006aa1fbd0b 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -127,28 +127,23 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
         }
 
         /*
-         * Set the waiting flag to notify readers that a writer is pending,
-         * or wait for a previous writer to go away.
+         * Wait for a previous writer to go away, then set the locked
+         * flag to notify future readers/writers that we are pending.
          */
         for (;;) {
                 struct __qrwlock *l = (struct __qrwlock *)lock;
 
                 if (!READ_ONCE(l->wmode) &&
-                    (cmpxchg(&l->wmode, 0, _QW_WAITING) == 0))
+                    (cmpxchg(&l->wmode, 0, _QW_LOCKED) == 0))
                         break;
 
                 cpu_relax_lowlatency();
         }
 
-        /* When no more readers, set the locked flag */
-        for (;;) {
-                if ((atomic_read(&lock->cnts) == _QW_WAITING) &&
-                    (atomic_cmpxchg(&lock->cnts, _QW_WAITING,
-                                    _QW_LOCKED) == _QW_WAITING))
-                        break;
-
+        /* Wait for the readers to drain */
+        while (smp_load_acquire((u32 *)&lock->cnts) & ~_QW_WMASK)
                 cpu_relax_lowlatency();
-        }
+
 unlock:
         arch_spin_unlock(&lock->lock);
 }
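
For clarity, the tail of queued_write_lock_slowpath() with the above
applied ends up reading like this (just the hunk's context and '+' lines
stitched back together; the start of the function is untouched):

        /*
         * Wait for a previous writer to go away, then set the locked
         * flag to notify future readers/writers that we are pending.
         */
        for (;;) {
                struct __qrwlock *l = (struct __qrwlock *)lock;

                if (!READ_ONCE(l->wmode) &&
                    (cmpxchg(&l->wmode, 0, _QW_LOCKED) == 0))
                        break;

                cpu_relax_lowlatency();
        }

        /* Wait for the readers to drain */
        while (smp_load_acquire((u32 *)&lock->cnts) & ~_QW_WMASK)
                cpu_relax_lowlatency();

unlock:
        arch_spin_unlock(&lock->lock);
}

The only functional change from the current code is that wmode goes
straight to _QW_LOCKED, so the second cmpxchg() loop on cnts is no
longer needed; the writer just waits for the reader count to drop to
zero.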
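
And to make the fairness point above concrete: new readers get diverted
into the slowpath by the _QW_WMASK test in the read fastpath, which
looks roughly like the below (sketch from memory of the asm-generic
code, so the exact atomic op and the slowpath prototype may differ in
whatever tree you're on):

static inline void queued_read_lock(struct qrwlock *lock)
{
        u32 cnts;

        cnts = atomic_add_return(_QR_BIAS, &lock->cnts);
        if (likely(!(cnts & _QW_WMASK)))
                return;

        /* Writer byte is nonzero: take the queued slowpath */
        queued_read_lock_slowpath(lock);
}

Since _QW_LOCKED is covered by _QW_WMASK just as _QW_WAITING was,
setting wmode to _QW_LOCKED before the existing readers have drained is
already enough to make later readers queue behind the pending writer.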