Re: [PATCH v7 4/5] locking/pvqspinlock: Allow 1 lock stealing attempt

From: Waiman Long
Date: Tue Oct 13 2015 - 16:46:04 EST


On 10/13/2015 03:39 PM, Peter Zijlstra wrote:
> On Tue, Sep 22, 2015 at 04:50:43PM -0400, Waiman Long wrote:
> > This patch allows one attempt for the lock waiter to steal the lock
> > when entering the PV slowpath. This helps to reduce the performance
> > penalty caused by lock waiter preemption while not having much of
> > the downsides of a real unfair lock.
>
> Changelog does not explain the implementation, which is subtle enough to
> warrant a few words.

I will add more information to the changelog.
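Roughly, the idea is that the queue head does not merely wait for the lock to be released but may also take the lock word directly, and a nonzero return value tells the caller to skip the normal MCS hand-off. A simplified sketch of that return-value contract (illustrative only, not the actual patch code; the real function also has to deal with the pv_wait/kick machinery, which is omitted here):

/*
 * Illustrative sketch: the queue head tries to grab the lock word
 * itself instead of only waiting for it to become free.  A nonzero
 * return means the lock was acquired here, so the caller can go
 * straight to releasing its MCS node.
 */
static u32 pv_wait_head_and_lock(struct qspinlock *lock,
				 struct mcs_spinlock *node, u32 tail)
{
	struct __qspinlock *l = (void *)lock;

	/* May block in pv_wait() and retry; one iteration shown here. */
	if (!READ_ONCE(l->locked) &&
	    cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)
		return 1;	/* lock acquired by this waiter */

	return 0;		/* fall back to the normal wait loop */
}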


> > @@ -417,7 +415,8 @@ queue:
> >	 * does not imply a full barrier.
> >	 *
> >	 */
> > -	pv_wait_head(lock, node);
> > +	if (pv_wait_head_and_lock(lock, node, tail))
> > +		goto release;
>
> That's very much: pv_wait_head_or_lock(), maybe _or_steal() is even
> better.

I am not very good at naming functions. Changing it to _or_steal() is fine with me.
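For the native (!CONFIG_PARAVIRT_SPINLOCKS) build, the hook presumably keeps a stub that always reports "not stolen" so the new branch compiles away; roughly something like this (an illustrative sketch with the suggested name, not the patch itself):

/* Native stub sketch: no lock stealing without paravirt spinlocks. */
static __always_inline u32 pv_wait_head_or_steal(struct qspinlock *lock,
						 struct mcs_spinlock *node,
						 u32 tail)
{
	return 0;	/* never steals on native builds */
}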

> >	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
> >		cpu_relax();
> >
> > @@ -454,7 +453,6 @@ queue:
> >		cpu_relax();
> >
> >	arch_mcs_spin_unlock_contended(&next->locked);
> > -	pv_kick_node(lock, next);
>
> Not sure about removing that, breaks symmetry.

> > +/*
> > + * Allow one unfair trylock when entering the PV slowpath to reduce the
> > + * performance impact of lock waiter preemption (either explicitly via
> > + * pv_wait or implicitly via PLE). This function will be called once when
> > + * a lock waiter enter the slowpath before being queued.
> > + *
> > + * A little bit of unfairness here can improve performance without many
> > + * of the downsides of a real unfair lock.
> > + */
> > +#define queued_spin_trylock(l)	pv_queued_spin_trylock_unfair(l)
> > +static inline bool pv_queued_spin_trylock_unfair(struct qspinlock *lock)
> > +{
> > +	struct __qspinlock *l = (void *)lock;
> > +
> > +	if (READ_ONCE(l->locked))
> > +		return 0;
> > +	/*
> > +	 * Wait a bit here to ensure that an actively spinning queue head vCPU
> > +	 * has a fair chance of getting the lock.
> > +	 */
> > +	cpu_relax();
> > +
> > +	return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
> > +}
>
> This doesn't seem to make any sense.. Its also very much distinct from
> the rest of the patch and can easily be added in a separate patch with
> separate performance numbers to show it does (or does not) make a
> difference.

If you mean that the extra cpu_relax() isn't needed here, I can take it out. It was there to give the actively spinning queue head vCPU a higher chance of getting the lock, but it is not essential.
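Without the extra cpu_relax(), the unfair trylock would collapse to something like this (same code as above minus the delay, shown only to illustrate):

static inline bool pv_queued_spin_trylock_unfair(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	/* Single steal attempt: only try the cmpxchg if the lock looks free. */
	return !READ_ONCE(l->locked) &&
	       cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
}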

Cheers,
Longman
