Re: [4.2, Regression] Queued spinlocks cause major XFS performance regression

From: Peter Zijlstra
Date: Sat Sep 05 2015 - 13:46:00 EST


On Fri, Sep 04, 2015 at 08:58:38AM -0700, Linus Torvalds wrote:
> On Fri, Sep 4, 2015 at 8:30 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >> And even ignoring the "implementation was crap" issue, some people may
> >> well want their kernels to be "bare hardware" kernels even under a
> >> hypervisor. It may be a slim hypervisor that gives you all the cpus,
> >> or it may just be a system that is just sufficiently overprovisioned,
> >> so you don't get vcpu preemption in practice.
> >
> > Fair enough; I had not considered the slim hypervisor case.
> >
> > Should I place the virt_spin_lock() thing under CONFIG_PARAVIRT (maybe
> > even _SPINLOCKS) such that only paravirt-enabled kernels, when run on a
> > hypervisor that does not support paravirt patching (HyperV, VMware,
> > etc.), revert to the test-and-set?
>
> My gut feel would be to try to match our old paravirt setup, which
> similarly replaced the ticket locks with the test-and-set lock, and
> try to match the situation where that happened?

I'm not sure there was a test-and-set option in 4.1.

Either the hypervisor layer (Xen, KVM) implemented paravirt spinlocks
and you had selected CONFIG_PARAVIRT_SPINLOCKS (which had a fairly
large negative impact on native code), or you got our native ticket
locking.
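
For reference, the 4.1-era hook points looked roughly like the sketch
below (from memory of arch/x86/include/asm/paravirt_types.h of that
time, so treat the details as illustrative rather than exact):

struct pv_lock_ops {
	/* called from the ticket lock slowpath after spinning a while */
	struct paravirt_callee_save lock_spinning;
	/* called on unlock to kick the vCPU waiting on the next ticket */
	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
};

With CONFIG_PARAVIRT_SPINLOCKS=y those hooks (and the slowpath flag
handling around them) were compiled into the ticket lock paths even on
bare metal, which is where the native overhead came from; without the
option you simply got the plain ticket locks.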

So if you want I can simply remove the whole test-and-set thing, but I'd
rather fix it and put it under one of the PARAVIRT options.
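
Concretely, the fallback could look something like the sketch below;
this is illustrative only, not final code -- the guard, the helper name
virt_spin_lock() and the exact spin loop are the things being discussed
above:

#ifdef CONFIG_PARAVIRT
#define virt_spin_lock virt_spin_lock
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	/* Bare metal: take the normal queued spinlock path. */
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/*
	 * On a hypervisor without paravirt patching, fall back to a
	 * simple test-and-set lock; a fair queued lock suffers badly
	 * from lock waiter preemption there.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}
#endif /* CONFIG_PARAVIRT */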
