Re: [PATCH 2/4] locking/qrwlock: Reduce reader/writer to reader lock transfer latency
From: Will Deacon
Date: Tue Jul 07 2015 - 13:27:35 EST
On Tue, Jul 07, 2015 at 03:30:22PM +0100, Waiman Long wrote:
> On 07/07/2015 07:49 AM, Will Deacon wrote:
> > On Tue, Jul 07, 2015 at 12:17:31PM +0100, Peter Zijlstra wrote:
> >> On Tue, Jul 07, 2015 at 10:17:11AM +0100, Will Deacon wrote:
> >>>>> Thinking about it, can we kill _QW_WAITING altogether and set (cmpxchg
> >>>>> from 0) wmode to _QW_LOCKED in the write_lock slowpath, polling (acquire)
> >>>>> rmode until it hits zero?
> >>>> No, this is how we make the lock fair so that an incoming stream of
> >>>> later readers won't block a writer from getting the lock.
> >>> But won't those readers effectively see that the lock is held for write
> >>> (because we set wmode to _QW_LOCKED before the existing readers have
> >>> drained) and therefore fall down the slow-path and get held up on the
> >>> spinlock?
> >> Yes, that's the entire point. Once there's a writer pending, new readers
> >> should queue too.
> > Agreed. My point was that we can achieve the same result without
> > a separate _QW_WAITING flag afaict.
>
> _QW_WAITING and _QW_LOCKED have different semantics and are necessary
> for the proper handshake between readers and a writer. We set
> _QW_WAITING when readers own the lock and the writer is waiting for
> the readers to go away. The _QW_WAITING flag forces new readers into
> the queue while the writer is waiting. We set _QW_LOCKED when a writer
> owns the lock, and it can only be set atomically when no reader is
> present. Without the intermediate _QW_WAITING step, a continuous
> stream of incoming readers (which keeps the reader count from ever
> reaching 0) could deny a writer the lock indefinitely.
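For anyone following along, here's roughly what that two-step handshake
looks like as a compressed userspace model (C11 atomics standing in for
the kernel's, a pthread mutex standing in for the wait-queue lock;
spin loops don't cpu_relax(), and the unlock paths are omitted):

        #include <pthread.h>
        #include <stdatomic.h>

        #define _QW_WAITING     1U      /* a writer is waiting for readers */
        #define _QW_LOCKED      0xffU   /* a writer holds the lock */
        #define _QW_WMASK       0xffU   /* writer mode mask (low byte) */
        #define _QR_BIAS        0x100U  /* one reader (count in upper bits) */

        struct qrwlock_model {
                atomic_uint cnts;               /* reader count | writer mode */
                pthread_mutex_t wait_lock;      /* models the wait-queue lock */
        };

        /* Reader fast path: back out and queue if any writer mode is set. */
        static int read_trylock(struct qrwlock_model *l)
        {
                unsigned int cnts = atomic_fetch_add(&l->cnts, _QR_BIAS);

                if (!(cnts & _QW_WMASK))
                        return 1;       /* no writer: lock acquired */
                atomic_fetch_sub(&l->cnts, _QR_BIAS);
                return 0;               /* writer waiting/active: go queue */
        }

        static void write_lock_slowpath(struct qrwlock_model *l)
        {
                unsigned int cnts;

                /* Queue up behind any earlier waiters. */
                pthread_mutex_lock(&l->wait_lock);

                /* No reader and no writer at all? Take the lock outright. */
                cnts = 0;
                if (atomic_compare_exchange_strong(&l->cnts, &cnts, _QW_LOCKED))
                        goto unlock;

                /*
                 * Step 1: once any previous writer is gone, publish
                 * _QW_WAITING so new readers see a writer pending and
                 * queue instead of bumping the reader count.
                 */
                do {
                        cnts = atomic_load(&l->cnts);
                } while ((cnts & _QW_WMASK) ||
                         !atomic_compare_exchange_weak(&l->cnts, &cnts,
                                                       cnts | _QW_WAITING));

                /*
                 * Step 2: wait for the existing readers to drain, then
                 * flip _QW_WAITING into _QW_LOCKED; the cmpxchg can only
                 * succeed once the reader count has hit zero.
                 */
                do {
                        cnts = _QW_WAITING;
                } while (!atomic_compare_exchange_weak(&l->cnts, &cnts,
                                                       _QW_LOCKED));
        unlock:
                pthread_mutex_unlock(&l->wait_lock);
        }

The key property is that _QW_WAITING diverts new readers into the queue
without claiming the lock, so the step-2 cmpxchg can still observe the
moment the old readers have all drained.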
It's probably best if I try to implement something and we can either
pick holes in the patch, or I'll realise why I'm wrong in the process :)
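
Concretely, I had something like this in mind (untested sketch, reusing
the model type above): wait for wmode to clear, cmpxchg it straight to
_QW_LOCKED, then poll until the reader count drains.

        static void write_lock_slowpath_no_waiting(struct qrwlock_model *l)
        {
                unsigned int cnts;

                pthread_mutex_lock(&l->wait_lock);

                /*
                 * Wait for any previous writer to go away, then claim
                 * the writer byte directly: cmpxchg wmode from 0 to
                 * _QW_LOCKED even though readers may still hold the lock.
                 */
                do {
                        cnts = atomic_load(&l->cnts);
                } while ((cnts & _QW_WMASK) ||
                         !atomic_compare_exchange_weak(&l->cnts, &cnts,
                                                       cnts | _QW_LOCKED));

                /*
                 * New readers now see the lock held for write and fall
                 * into the queue; poll until the pre-existing readers
                 * drain (real code would use acquire semantics and
                 * cpu_relax() here).
                 */
                while (atomic_load(&l->cnts) != _QW_LOCKED)
                        ;

                pthread_mutex_unlock(&l->wait_lock);
        }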
Will