On 09/14/2015 10:00 AM, Peter Zijlstra wrote:
On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
This patch allows one attempt for the lock waiter to steal the lock
when entering the PV slowpath. This helps to reduce the performance
penalty caused by lock waiter preemption while not having much of
the downsides of a real unfair lock.
@@ -415,8 +458,12 @@ static void pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node)
 	for (;; waitcnt++) {
 		for (loop = SPIN_THRESHOLD; loop; loop--) {
-			if (!READ_ONCE(l->locked))
-				return;
+			/*
+			 * Try to acquire the lock when it is free.
+			 */
+			if (!READ_ONCE(l->locked) &&
+			    (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0))
+				goto gotlock;
 			cpu_relax();
 		}
This isn't _once_, this is once per 'wakeup'. And note that interrupts
unrelated to the kick can equally wake the vCPU up.
Oh! There is a minor bug: I shouldn't need to have a second READ_ONCE() call here.