Re: [4.2, Regression] Queued spinlocks cause major XFS performance regression

From: Peter Zijlstra
Date: Fri Sep 04 2015 - 11:30:44 EST

On Fri, Sep 04, 2015 at 08:21:28AM -0700, Linus Torvalds wrote:
> On Fri, Sep 4, 2015 at 8:14 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > The reason we chose to revert to a test-and-set is because regular fair
> > locks, like the ticket and the queue thing, have horrible behaviour
> > under vcpu preemption.
>
> Right. However, with our old ticket locks, that's what we got when you
> didn't ask for paravirt support. No?


> And even ignoring the "implementation was crap" issue, some people may
> well want their kernels to be "bare hardware" kernels even under a
> hypervisor. It may be a slim hypervisor that gives you all the cpus,
> or it may just be a system that is just sufficiently overprovisioned,
> so you don't get vcpu preemption in practice.

Fair enough; I had not considered the slim hypervisor case.

Should I place the virt_spin_lock() thing under CONFIG_PARAVIRT (maybe
even _SPINLOCKS) such that only paravirt-enabled kernels revert to the
test-and-set when run on a hypervisor that does not support paravirt
patching (HyperV, VMware, etc.)?

> But it would be interesting to hear if just fixing the busy-looping to
> not pound the lock with a constant stream of cmpxchg's is already
> sufficient to fix the big picture problem.

Dave replaced the cpu_relax() with a __delay(1) to match what
spinlock-debug does and that fixed things for him.

Of course, it would be good if he could try the proposed patch too.