Re: [RFC PATCH RT] rwsem: The return of multi-reader PI rwsems
From: Peter Zijlstra
Date: Thu Apr 10 2014 - 14:48:11 EST
On Thu, Apr 10, 2014 at 05:03:36PM +0200, Sebastian Andrzej Siewior wrote:
> On 04/10/2014 04:44 PM, Clark Williams wrote:
> > The means of each group of five test runs are:
> >
> > vanilla.log:   1210117
> > rt.log:       17210953 (14.2x slower than vanilla)
> > rt-fixes.log: 10062027 ( 8.3x slower than vanilla)
> > rt-multi.log:  3179582 ( 2.6x slower than vanilla)
> >
> >
> > As expected, vanilla kicked RT's butt when hammering on the
> > mmap_sem. But somewhat unexpectedly, your fixups helped quite a
> > bit and the multi+fixups got RT back into being almost
> > respectable.
> >
> > Obviously these are just preliminary results on one piece of h/w
> > but it looks promising.
>
> Is it easy to look at the latency when you have multiple readers and
> a high prio writer which has to boost all those readers away instead
> of just one?
> Or is this something that should not happen for a high prio RT task
> because it has all memory already allocated?
With care it should not happen; it should be relatively straightforward
to avoid all system calls that take mmap_sem for writing.
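
For illustration (not part of the patch under discussion): a typical RT
application sidesteps mmap_sem writers by locking and pre-faulting all
of its memory before entering the time-critical section. A minimal
sketch using the standard POSIX calls; the buffer size is an arbitrary
example:

	#include <stdlib.h>
	#include <string.h>
	#include <sys/mman.h>

	#define PREALLOC_SIZE (64 * 1024 * 1024)  /* arbitrary example size */

	/* Lock and pre-fault memory up front so that later page faults
	 * and allocations never need to take mmap_sem for writing. */
	static void *rt_prepare_memory(void)
	{
		void *buf;

		/* Pin current and all future mappings into RAM. */
		if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
			return NULL;

		buf = malloc(PREALLOC_SIZE);
		if (!buf)
			return NULL;

		/* Touch every page now, while faults are still cheap. */
		memset(buf, 0, PREALLOC_SIZE);

		return buf;
	}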
But yes, the total latency is a concern. That said, it is the very
reason there is a hard limit on reader concurrency, and why that limit
is a tunable.
It defaults to the total number of CPUs in the system; given the default
setup (all CPUs in a single balance domain), this should result in all
CPUs working concurrently on the boosted read sides.
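
Conceptually the cap can be pictured as a counting limit on the read
side. A rough userspace sketch of the idea only -- the names rt_rwsem,
max_readers and reader_trylock are made up here, this is not the actual
-rt code, and all writer/PI-boosting state is omitted:

	#include <stdatomic.h>
	#include <stdbool.h>

	struct rt_rwsem {
		atomic_int readers;	/* current read-side holders */
		int max_readers;	/* the tunable; defaults to nr_cpus */
	};

	/* Admit a new reader only while under the cap; past it, a
	 * reader must sleep as if a writer held the lock.  This bounds
	 * how many tasks a writer can ever need to boost at once. */
	static bool reader_trylock(struct rt_rwsem *sem)
	{
		int r = atomic_load(&sem->readers);

		while (r < sem->max_readers) {
			if (atomic_compare_exchange_weak(&sem->readers,
							 &r, r + 1))
				return true;
			/* CAS failure reloaded r; retry with new value. */
		}
		return false;
	}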
So while there is always some overhead, the worst case should not be
nr_readers * read-hold-time.
Although, with more (unrelated) higher prio threads you can indeed wreck
this. Similarly, by partitioning the system and not adjusting the max
reader limit you can also get into trouble.
But then, the above nr_readers * read-hold-time is still an upper bound,
and the entire thing does stay deterministic.
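
To put rough, purely invented numbers on that: with a 10us read-hold
time and the cap left at 8 readers on an 8-CPU single balance domain,
all boosted readers run in parallel and the writer waits on the order of
one hold time, ~10us. Partition the same box down to 2 CPUs without
lowering the cap and the 8 boosted readers serialize four deep, so the
wait grows toward 4 * 10us = 40us, with 8 * 10us = 80us (nr_readers *
read-hold-time) remaining the deterministic upper bound.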