On Tue, Sep 22, 2015 at 04:50:43PM -0400, Waiman Long wrote:
> This patch allows one attempt for the lock waiter to steal the lock
> when entering the PV slowpath. This helps to reduce the performance
> penalty caused by lock waiter preemption while not having much of
> the downsides of a real unfair lock.

Changelog does not explain the implementation, which is subtle enough to
warrant a few words.
> @@ -417,7 +415,8 @@ queue:
>  	 * does not imply a full barrier.
>  	 *
>  	 */
> -	pv_wait_head(lock, node);
> +	if (pv_wait_head_and_lock(lock, node, tail))
> +		goto release;

That's very much: pv_wait_head_or_lock(), maybe _or_steal() is even
better.
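
Roughly what I have in mind for the call site; the return convention here
(nonzero iff we took the lock while waiting at the head) is my assumption,
not necessarily what your patch ends up doing:

	/*
	 * Naming sketch only: same arguments as pv_wait_head_and_lock(),
	 * returns nonzero when the lock was acquired (or stolen) while
	 * waiting at the queue head, so the caller can skip the handoff.
	 */
	if (pv_wait_head_or_lock(lock, node, tail))
		goto release;

That way the name also says what the return value means.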
>  	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
>  		cpu_relax();
> @@ -454,7 +453,6 @@ queue:
>  		cpu_relax();
>
>  	arch_mcs_spin_unlock_contended(&next->locked);
> -	pv_kick_node(lock, next);

Not sure about removing that, breaks symmetry.
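
To spell out the symmetry I mean (simplified from the current slowpath as I
read it, not a verbatim quote):

	/* wait side: the queued vCPU may go to sleep in pv_wait_node() */
	pv_wait_node(node);
	arch_mcs_spin_lock_contended(&node->locked);

	...

	/* handoff side: pv_kick_node() matches the state set by pv_wait_node() */
	arch_mcs_spin_unlock_contended(&next->locked);
	pv_kick_node(lock, next);

Dropping only the kick side leaves the wait side without its counterpart.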
> +/*
> + * Allow one unfair trylock when entering the PV slowpath to reduce the
> + * performance impact of lock waiter preemption (either explicitly via
> + * pv_wait or implicitly via PLE). This function will be called once when
> + * a lock waiter enter the slowpath before being queued.
> + *
> + * A little bit of unfairness here can improve performance without many
> + * of the downsides of a real unfair lock.
> + */
> +#define queued_spin_trylock(l)	pv_queued_spin_trylock_unfair(l)
> +static inline bool pv_queued_spin_trylock_unfair(struct qspinlock *lock)
> +{
> +	struct __qspinlock *l = (void *)lock;
> +
> +	if (READ_ONCE(l->locked))
> +		return 0;
> +	/*
> +	 * Wait a bit here to ensure that an actively spinning queue head vCPU
> +	 * has a fair chance of getting the lock.
> +	 */
> +	cpu_relax();
> +
> +	return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
> +}

This doesn't seem to make any sense.. It's also very much distinct from
the rest of the patch and can easily be added in a separate patch with
separate performance numbers to show it does (or does not) make a
difference.