Re: [RFC PATCH] alispinlock: acceleration from lock integration on multi-core platform

From: Peter Zijlstra
Date: Wed Jan 06 2016 - 03:16:54 EST


On Tue, Jan 05, 2016 at 09:42:27PM +0000, One Thousand Gnomes wrote:
> > It suffers the typical problems all those constructs do; namely it
> > wrecks accountability.
>
> That's "government thinking" ;-) - for most real users throughput is
> more important than accountability. With the right API it ought to also
> be compile time switchable.

It's to do with having been involved with -rt. RT wants accountability
for such things because of priority inheritance (PI) and the like.

> > But here that is compounded by the fact that you inject other people's
> > work into 'your' lock region, thereby bloating lock hold times. Worse,
> > afaict (from a quick reading) there really isn't a bound on the amount
> > of work you inject.
>
> That should be relatively easy to fix, but for this kind of lock you
> normally get the big wins from sections that execute only a short
> amount of code. The fairness you trade away in the cases where it is
> useful should be tiny except under extreme load, where the
> "accountability first" behaviour would be to fall over in a heap.
>
> If your "lock" involves a lot of work then it should probably be a
> work queue, or not use this kind of locking at all.

Sure, but the fact that it was not even mentioned/considered doesn't
give me a warm fuzzy feeling.
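
(For the heavyweight case, the stock workqueue API is the obvious
shape. A rough sketch, with made-up names (heavy_req, heavy_fn) and no
relation to the posted patch:)

#include <linux/slab.h>
#include <linux/workqueue.h>

/* Hypothetical request carrying a heavyweight section. */
struct heavy_req {
	struct work_struct work;
	int payload;
};

static void heavy_fn(struct work_struct *work)
{
	struct heavy_req *req = container_of(work, struct heavy_req, work);

	/*
	 * The long-running part executes in a kworker, so its CPU time
	 * is accounted to that worker instead of being hidden inside
	 * somebody else's lock hold time.
	 */
	pr_info("processed payload %d\n", req->payload);
	kfree(req);
}

static int heavy_submit(int payload)
{
	struct heavy_req *req = kzalloc(sizeof(*req), GFP_KERNEL);

	if (!req)
		return -ENOMEM;

	req->payload = payload;
	INIT_WORK(&req->work, heavy_fn);
	schedule_work(&req->work);
	return 0;
}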

> > And while it's a cute collapse of an MCS lock and a lockless-list
> > style work queue (MCS, after all, is a lockless list), saving a few
> > cycles over the naive spinlock+llist implementation of the same
> > thing, I really do not see enough justification for any of this.
>
> I've only personally dealt with such locks in the embedded space but
> there it was a lot more than a few cycles because you go from

Nah, what I meant was that you can do the same callback-style construct
with an llist and a spinlock.
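
Roughly like this (a sketch only: names like cmb_lock/cmb_exec are made
up, memory-ordering details are kept minimal, and it is not the posted
patch):

#include <linux/spinlock.h>
#include <linux/llist.h>

struct cmb_work {
	struct llist_node node;
	void (*fn)(void *arg);
	void *arg;
	int done;
};

struct cmb_lock {
	spinlock_t lock;
	struct llist_head pending;
};

/*
 * Run fn(arg) under the lock.  Waiters publish a request on the llist;
 * whoever holds the spinlock executes the whole batch.  Note nothing
 * here bounds how much foreign work one holder runs, which is exactly
 * the objection above; capping the batch walk would be the obvious
 * (unshown) fix.
 */
static void cmb_exec(struct cmb_lock *cl, struct cmb_work *w,
		     void (*fn)(void *arg), void *arg)
{
	struct cmb_work *it, *tmp;
	struct llist_node *batch;

	w->fn = fn;
	w->arg = arg;
	w->done = 0;
	llist_add(&w->node, &cl->pending);

	while (!smp_load_acquire(&w->done)) {
		if (!spin_trylock(&cl->lock)) {
			cpu_relax();
			continue;
		}
		/* We are the combiner: drain and run one batch. */
		batch = llist_del_all(&cl->pending);
		batch = llist_reverse_order(batch);	/* rough FIFO */
		llist_for_each_entry_safe(it, tmp, batch, node) {
			it->fn(it->arg);
			/* After this store the owner may reuse 'it'. */
			smp_store_release(&it->done, 1);
		}
		spin_unlock(&cl->lock);
	}
}

The MCS-style collapse then mostly saves the spinlock cacheline bounce
relative to this baseline; that's the "few cycles" above.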

> The claim in the original post is 3x performance, but it doesn't
> explain performance doing what, which kernel locks were switched, or
> what patches were used. I don't find the numbers hard to believe for a
> big, big box, but I'd like to see the actual use-case patches so it
> can be benched with other workloads, and for latency and the like.

Very much agreed, those claims need to be substantiated with actual
patches using this thing and independently verified.