Re: [PATCH 2/2] smp_call_function: use rwlocks on queues rather than rcu

From: Nick Piggin
Date: Fri Aug 22 2008 - 05:13:10 EST


On Friday 22 August 2008 17:12, Ingo Molnar wrote:
> * Pekka Enberg <penberg@xxxxxxxxxxxxxx> wrote:
> > Hi Ingo,
> >
> > On Fri, Aug 22, 2008 at 9:28 AM, Ingo Molnar <mingo@xxxxxxx> wrote:
> > > * Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
> > >> RCU can only control the lifetime of allocated memory blocks, which
> > >> forces all the call structures to be allocated. This is expensive
> > >> compared to allocating them on the stack, which is the common case for
> > >> synchronous calls.
> > >>
> > >> This patch takes a different approach. Rather than using RCU, the
> > >> queues are managed under rwlocks. Adding or removing from the queue
> > >> requires holding the lock for writing, but multiple CPUs can walk the
> > >> queues to process function calls under read locks. In the common
> > >> case, where the structures are stack allocated, the calling CPU need
> > >> only wait for its call to be done, take the lock for writing and
> > >> remove the call structure.
> > >>
> > >> Lock contention - particularly write vs read - is reduced by using
> > >> multiple queues.
> > >
> > > hm, is there any authoritative data on what is cheaper on a big box, a
> > > full-blown MESI cache miss that occurs for every reader in this new
> > > fastpath, or a local SLAB/SLUB allocation+free that occurs with the
> > > current RCU approach?
> >
> > Christoph might have an idea about it.
>
> ... thought of that missing Cc: line entry exactly 1.3 seconds after
> having sent the mail :)
>
> Christoph, any preferences/suggestions?

I think it's just going to be a matter of benchmarking it and seeing.
And small/medium systems are probably more important than huge ones,
unless there is a pathological scalability problem with one of the
approaches (which there probably isn't, seeing as there is already
locking there).
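
For reference, a minimal user-space sketch of the rwlock-protected queue
scheme Jeremy describes above: the call structure lives on the caller's
stack, enqueueing and dequeueing take the lock for writing, and any number
of CPUs can walk the queue under read locks. All names here (struct
call_entry, queue_call, process_queue, dequeue_call) are illustrative only,
and pthreads stand in for the kernel's rwlocks and IPIs; this is not the
actual patch code.

/*
 * Illustrative sketch of an rwlock-managed call queue with
 * stack-allocated entries.  Not the real kernel/smp.c code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct call_entry {
	struct call_entry *next;
	void (*func)(void *);
	void *arg;
	bool done;			/* set once the call has run */
};

static struct call_entry *queue_head;
static pthread_rwlock_t queue_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Caller side: enqueue a stack-allocated entry under the write lock. */
static void queue_call(struct call_entry *e)
{
	pthread_rwlock_wrlock(&queue_lock);
	e->next = queue_head;
	queue_head = e;
	pthread_rwlock_unlock(&queue_lock);
}

/* "IPI handler" side: many CPUs may walk the queue under read locks. */
static void process_queue(void)
{
	struct call_entry *e;

	pthread_rwlock_rdlock(&queue_lock);
	for (e = queue_head; e; e = e->next) {
		if (!e->done) {
			e->func(e->arg);
			e->done = true;	/* real code would use a proper completion */
		}
	}
	pthread_rwlock_unlock(&queue_lock);
}

/* Caller side: once the call is done, unlink the entry under the write lock. */
static void dequeue_call(struct call_entry *e)
{
	struct call_entry **p;

	pthread_rwlock_wrlock(&queue_lock);
	for (p = &queue_head; *p; p = &(*p)->next) {
		if (*p == e) {
			*p = e->next;
			break;
		}
	}
	pthread_rwlock_unlock(&queue_lock);
}

static void say_hello(void *arg)
{
	printf("hello from %s\n", (const char *)arg);
}

int main(void)
{
	struct call_entry e = { .func = say_hello, .arg = "queued call" };

	queue_call(&e);		/* lives on the stack: no kmalloc, no RCU */
	process_queue();	/* normally done by the target CPU(s) */
	while (!e.done)
		;		/* caller waits for its own call only */
	dequeue_call(&e);
	return 0;
}

The busy-wait on e.done stands in for whatever completion/flag handshake
the real code uses; the point is just that nothing on the queue outlives
the caller's stack frame, so no allocation or RCU grace period is needed.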