Re: Preemptable Ticket Spinlock

From: Peter Zijlstra
Date: Mon Apr 22 2013 - 16:55:48 EST


On Mon, 2013-04-22 at 16:46 -0400, Jiannan Ouyang wrote:
> On Mon, Apr 22, 2013 at 4:08 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> >
> > I much prefer the entire series from Jeremy since it maintains the
> > ticket semantics and doesn't degrade the lock to unfair under
> > contention.
> >
> > Now I suppose there's a reason it's not been merged yet, and I suspect
> > it's the !paravirt hotpath impact which wasn't rightly justified or
> > somesuch, so maybe someone can work on that or so.. dunno.
> >
> >
>
> In my paper, I compared preemptable-lock and the pv_lock on KVM from
> Raghu and Jeremy.

Which pv_lock? The current pv spinlock mess is basically the old unfair
thing. The later patch series I referred to earlier implemented a
paravirt ticket lock, which should perform much better under overcommit.
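(For the archives, a minimal sketch of the ticket semantics in question;
illustrative only, not the actual arch/x86 code, and using GCC builtins
rather than the kernel's atomics:)

typedef struct {
	unsigned short head;	/* ticket currently being served */
	unsigned short tail;	/* next ticket to hand out */
} ticket_lock_t;

static inline void ticket_lock(ticket_lock_t *lock)
{
	/* take a ticket; waiters are served in strict FIFO order */
	unsigned short me = __atomic_fetch_add(&lock->tail, 1, __ATOMIC_RELAXED);

	while (__atomic_load_n(&lock->head, __ATOMIC_ACQUIRE) != me)
		__builtin_ia32_pause();		/* cpu_relax() stand-in */
}

static inline void ticket_unlock(ticket_lock_t *lock)
{
	/* hand the lock to the next ticket holder */
	__atomic_store_n(&lock->head, lock->head + 1, __ATOMIC_RELEASE);
}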

> Results show that:
> - preemptable-lock improves performance significantly without paravirt support

But completely wrecks our native spinlock implementation so that's not
going to happen of course ;-)

> - preemptable-lock can also be paravirtualized, which outperforms
> pv_lock, especially when overcommitted by 3 or more

See above..

> - pv-preemptable-lock has much less performance variance compared to
> pv_lock, because it adapts to preemption within the VM rather than
> relying on rescheduling, which increases VM interference

I would say it has a _much_ worse worst case (and thus worse variance)
than the paravirt ticket implementation from Jeremy. While the full
paravirt ticket lock results in vcpu scheduling, it does maintain
fairness.

If you drop strict fairness you can end up in unbounded starvation
cases and those are very ugly indeed.
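To make the trade-off concrete, here's a rough sketch of the shape of
Jeremy's approach as I understand it: spin a bit, then block the vcpu,
but keep strict ticket order so fairness survives. pv_wait(),
pv_kick_ticket() and SPIN_THRESHOLD are stand-ins here, not the series'
actual interface:

/* stand-ins for the hypervisor block/wakeup interface */
void pv_wait(unsigned short *ticket_head, unsigned short my_ticket);
void pv_kick_ticket(ticket_lock_t *lock, unsigned short next_ticket);

#define SPIN_THRESHOLD	(1 << 11)	/* illustrative value */

static inline void pv_ticket_lock(ticket_lock_t *lock)
{
	unsigned short me = __atomic_fetch_add(&lock->tail, 1, __ATOMIC_RELAXED);
	unsigned int loops;

	for (;;) {
		for (loops = 0; loops < SPIN_THRESHOLD; loops++) {
			if (__atomic_load_n(&lock->head, __ATOMIC_ACQUIRE) == me)
				return;
			__builtin_ia32_pause();
		}
		/* still not our turn: block this vcpu instead of burning cycles */
		pv_wait(&lock->head, me);
	}
}

static inline void pv_ticket_unlock(ticket_lock_t *lock)
{
	unsigned short next = lock->head + 1;

	__atomic_store_n(&lock->head, next, __ATOMIC_RELEASE);
	/* wake whichever vcpu holds the 'next' ticket, if it blocked */
	pv_kick_ticket(lock, next);
}

The point being: the worst case is a vcpu wakeup, but whoever has been
waiting longest still gets the lock next.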

> It would still be very interesting to conduct more experiments to
> compare these two, to see if the fairness enforced by pv_lock is
> mandatory, and if preemptable-lock outperforms pv_lock in most cases,
> and how do they work with PLE.

Be more specific: pv_lock as currently upstream is a trainwreck, mostly
done because pure ticket spinning combined with vcpu preemption is even
worse.
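For contrast, the "old unfair thing" is essentially a test-and-set byte
lock that yields to the hypervisor after spinning for a while: whichever
waiter happens to retry first wins, so there is no ordering at all.
Again a rough sketch, with hypervisor_yield() standing in for the actual
yield hypercall:

void hypervisor_yield(void);	/* stand-in for the real yield hypercall */

static inline void unfair_pv_lock(unsigned char *lock)
{
	unsigned int loops;

	while (__atomic_exchange_n(lock, 1, __ATOMIC_ACQUIRE)) {
		/* spin-read for a while, then give the cpu back */
		for (loops = 0; loops < SPIN_THRESHOLD; loops++) {
			if (!__atomic_load_n(lock, __ATOMIC_RELAXED))
				break;
			__builtin_ia32_pause();
		}
		if (__atomic_load_n(lock, __ATOMIC_RELAXED))
			hypervisor_yield();
	}
}

static inline void unfair_pv_unlock(unsigned char *lock)
{
	__atomic_store_n(lock, 0, __ATOMIC_RELEASE);
}

Which is exactly where the unbounded starvation comes from.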
