Re: [4.2, Regression] Queued spinlocks cause major XFS performance regression

From: Davidlohr Bueso
Date: Sun Sep 06 2015 - 20:06:08 EST


On Fri, 04 Sep 2015, Peter Zijlstra wrote:

-static inline bool virt_queued_spin_lock(struct qspinlock *lock)
+static inline bool virt_spin_lock(struct qspinlock *lock)

Given that we fall back to the cmpxchg loop even when PARAVIRT is not in the
picture, I believe this function is horribly misnamed.

{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

-	while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0)
-		cpu_relax();
+	/*
+	 * On hypervisors without PARAVIRT_SPINLOCKS support we fall
+	 * back to a Test-and-Set spinlock, because fair locks have
+	 * horrible lock 'holder' preemption issues.
+	 */
+

This comment is also misleading... but if you tuck the whole function
under some PARAVIRT option, it obviously makes sense to just leave it
as is, and let native actually _use_ qspinlocks.
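
Roughly what I have in mind (just a sketch; the CONFIG_PARAVIRT guard,
the native stub and the exact placement are my guesses at what such an
option could look like, not something taken from the patch):

#ifdef CONFIG_PARAVIRT
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/*
	 * No PARAVIRT_SPINLOCKS support in the hypervisor: fall back
	 * to a Test-and-Set lock to avoid lock holder preemption.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}
#else
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	/* Native: always take the regular qspinlock path. */
	return false;
}
#endif /* CONFIG_PARAVIRT */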

+	do {
+		while (atomic_read(&lock->val) != 0)
+			cpu_relax();
+	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

CCAS to the rescue again: spinning on the plain atomic_read() keeps the
cacheline shared among waiters, and we only attempt the cmpxchg (and pull
the line in exclusive) once the lock actually looks free.

Thanks,
Davidlohr