Re: [PATCH RFC] v5 expedited "big hammer" RCU grace periods

From: Paul E. McKenney
Date: Tue May 19 2009 - 12:18:53 EST


On Tue, May 19, 2009 at 02:44:36PM +0200, Ingo Molnar wrote:
>
> * Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
>
> > On Tue, May 19, 2009 at 10:58:25AM +0200, Ingo Molnar wrote:
> > >
> > > * Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> > >
> > > > On Mon, May 18, 2009 at 05:42:41PM +0200, Ingo Molnar wrote:
> > > > >
> > > > > * Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > > > I might be missing something fundamental here, but why not just
> > > > > > > have per-CPU helper threads, all on the same waitqueue, and wake
> > > > > > > them up via a single wake_up() call? That would remove the SMP
> > > > > > > cross call (wakeups do immediate cross-calls already).
> > > > > >
> > > > > > My concern with this is that the cache misses accessing all the
> > > > > > processes on this single waitqueue would be serialized, slowing
> > > > > > things down. In contrast, the bitmask that smp_call_function()
> > > > > > traverses delivers on the order of a thousand CPUs' worth of bits
> > > > > > per cache miss. I will give it a try, though.
> > > > >
> > > > > At least if you go via the migration threads, you can queue up
> > > > > requests to them locally. But there are going to be cache misses
> > > > > _anyway_, since you have to access them all from a single CPU,
> > > > > and then they have to fetch details about what to do, and then
> > > > > have to notify the originator about completion.
> > > >
> > > > Ah, so you are suggesting that I use smp_call_function() to run
> > > > code on each CPU, and have that code wake up the CPU's migration
> > > > thread? I will take a look at this.
> > >
> > > My suggestion was to queue up a dummy 'struct migration_req' with it
> > > (letting migration_req::task == NULL mean 'nothing to migrate') and
> > > simply wake it up using wake_up_process().
> >
> > OK. I was thinking of just using wake_up_process() without the
> > migration_req structure, and unconditionally setting a per-CPU
> > variable from within migration_thread() just before the list_empty()
> > check. In your approach we would need a NULL-pointer check just
> > before the call to __migrate_task().
> >
> > > That will force a quiescent state, without the need for any extra
> > > information, right?
> >
> > Yep!
> >
> > > This is what the scheduler code does, roughly:
> > >
> > >         wake_up_process(rq->migration_thread);
> > >         wait_for_completion(&req.done);
> > >
> > > and this will always have to perform well. The 'req' could be put
> > > into PER_CPU, and a loop could be done like this:
> > >
> > >         for_each_online_cpu(cpu)
> > >                 wake_up_process(cpu_rq(cpu)->migration_thread);
> > >
> > >         for_each_online_cpu(cpu)
> > >                 wait_for_completion(&per_cpu(req, cpu).done);
> > >
> > > hm?
> >
> > My concern is the linear slowdown for large systems, but this
> > should be OK for modest systems (a few tens of CPUs). However, I
> > will try it out -- it does not need to be a long-term solution,
> > after all.
>
> I think there is going to be a linear slowdown no matter what -
> because sending that many IPIs is going to be linear. (there are no
> 'broadcast to all' IPIs anymore - on x86 we only have them if all
> physical APIC IDs are 7 or smaller.)

With the current code, agreed. One could imagine making an IPI tree,
so that a given CPU IPIs (say) eight subordinates. Making this work
nicely with CPU hotplug would be entertaining, to say the least.
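
Just to make the shape concrete, the fanout might look something like
the sketch below. This is only a back-of-the-envelope sketch: the
handler name and fanout constant are made up, the kernel's restrictions
on sending IPIs from interrupt context are waved away, and CPU hotplug
is ignored entirely.

        #define EXP_FANOUT 8

        /*
         * Hypothetical handler: each CPU forwards the IPI to up to
         * eight subordinates before doing its own local work, so the
         * IPIs fan out in O(log CPUs) stages instead of one CPU
         * sending every last one of them.
         */
        static void exp_ipi_handler(void *unused)
        {
                int cpu = smp_processor_id();
                int child;
                int i;

                for (i = 1; i <= EXP_FANOUT; i++) {
                        child = cpu * EXP_FANOUT + i;
                        if (child < nr_cpu_ids && cpu_online(child))
                                smp_call_function_single(child,
                                                         exp_ipi_handler,
                                                         NULL, 0);
                }
                /* ... local quiescent-state bookkeeping goes here ... */
        }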

> Also, no matter what scheme we use, the target CPU does have to be
> processed somehow and it does have to signal completion back somehow
> - which generates cache misses.

One could in theory use a combining tree, so that results filter up,
sort of like they do in rcutree. But given that rcutree already has a
combining tree, I would like to do this part in rcutree.
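
For illustration only, the reporting side of such a combining tree
might look vaguely like the following. The structure and names are
invented for this sketch (they are not rcutree's real rcu_node tree),
and each node's ->outstanding counter is assumed to have been
initialized beforehand to its number of children (or, for leaf nodes,
of reporting CPUs).

        struct exp_node {
                atomic_t outstanding;           /* children yet to report */
                struct exp_node *parent;        /* NULL at the root */
        };

        /*
         * Each CPU calls this on its leaf node after passing through a
         * quiescent state.  Only the last reporter at each level
         * proceeds upward, so CPUs contend mostly within their own
         * subtree rather than all on one global cache line.
         */
        static void exp_report(struct exp_node *np, struct completion *done)
        {
                for (; np != NULL; np = np->parent)
                        if (!atomic_dec_and_test(&np->outstanding))
                                return;         /* siblings still pending */
                complete(done);                 /* root drained: all done */
        }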

> I think what probably matters most is to go simple, and to use
> established kernel primitives - and the above is really a typical
> pattern for things like TLB flushes to a process with a presence
> on every physical CPU. Those aspects will be kept reasonably fast
> and balanced on all hardware that matters. (And if not, people will
> notice any TLB flush/shootdown linear slowdowns and address them.)
>
> I could be wrong though ... maybe someone can get some numbers from
> a really large system?

In theory, I have access to a 64-way system. In practice, it is
extremely heavily booked.

I will try your straightforward approach.
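
Presumably something along the lines of the sketch below, using a dummy
per-CPU request as you describe. I am writing this from memory, so the
initialization and locking details may well be wrong, and it would have
to live in sched.c (or have the rq innards exported) to compile at all.
It also assumes migration_thread() has been taught to treat a NULL
->task as "no migration to do, just complete ->done".

        static DEFINE_PER_CPU(struct migration_req, exp_req);

        for_each_online_cpu(cpu) {
                struct migration_req *req = &per_cpu(exp_req, cpu);
                struct rq *rq = cpu_rq(cpu);

                req->task = NULL;       /* dummy: quiescent state only */
                init_completion(&req->done);
                spin_lock_irq(&rq->lock);
                list_add(&req->list, &rq->migration_queue);
                spin_unlock_irq(&rq->lock);
                wake_up_process(rq->migration_thread);
        }

        for_each_online_cpu(cpu)
                wait_for_completion(&per_cpu(exp_req, cpu).done);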

Thanx, Paul