Re: [PATCH] a local-timer-free version of RCU
From: Paul E. McKenney
Date: Mon Nov 08 2010 - 14:52:55 EST
On Mon, Nov 08, 2010 at 04:15:38PM +0000, houston.jim@xxxxxxxxxxx wrote:
> Hi Everyone,
>
> I'm sorry I started this thread and have not been able to keep up
> with the discussion. I agree that the problems described are real.
Not a problem -- your patch is helpful in any case.
> > > UAS> PEM> o CPU 1 continues in rcu_grace_period_complete(),
> > > UAS> PEM> incorrectly ending the new grace period.
> > > UAS> PEM>
> > > UAS> PEM> Or am I missing something here?
> > > UAS>
> > > UAS> The scenario you describe seems possible. However, it should be easily
> > > UAS> fixed by passing the perceived batch number as another parameter to
> > > UAS> rcu_set_state() and making it part of the cmpxchg. So if the caller
> > > UAS> tries to set state bits on a stale batch number (e.g., batch !=
> > > UAS> rcu_batch), it can be detected.
>
> My thought on how to fix this case is to hand off DO_RCU_COMPLETION to only
> a single CPU. The rcu_unlock that receives this handoff would clear its
> own bit and then call rcu_poll_other_cpus() to complete the process.
Or we could map to TREE_RCU's data structures, with one thread per
leaf rcu_node structure.
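For concreteness, here is a minimal userspace sketch of the cmpxchg idea quoted
above: pack the batch number and the state bits into a single word so that one
compare-and-swap both sets the state and verifies the batch. The rcu_set_state()
name comes from the discussion; the field widths, the RCU_COMPLETE value, and
the packing are illustrative assumptions rather than the actual patch.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define RCU_STATE_BITS	2
#define RCU_STATE_MASK	((1u << RCU_STATE_BITS) - 1)
#define RCU_COMPLETE	0x1u		/* assumed state bit for "grace period done" */

/* Batch number in the high bits, state bits in the low bits. */
static _Atomic uint32_t rcu_batch_state;

/*
 * Try to set @state for grace period @batch.  Returns false if the global
 * batch number no longer matches @batch, i.e. the caller raced with the
 * start of a new grace period and must not complete the old one.
 */
static bool rcu_set_state(uint32_t batch, uint32_t state)
{
	uint32_t old = atomic_load(&rcu_batch_state);

	do {
		if ((old >> RCU_STATE_BITS) != batch)
			return false;		/* stale batch, back off */
	} while (!atomic_compare_exchange_weak(&rcu_batch_state, &old,
					       old | state));
	return true;
}

If the batch has advanced, the compare fails and rcu_set_state() returns false,
so a CPU that raced with the start of a new grace period cannot incorrectly
end it.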
> > What is scary with this is that it also changes rcu sched semantics, and users
> > of call_rcu_sched() and synchronize_sched(), who rely on that to do more
> > tricky things than just waiting for rcu_dereference_sched() pointer grace periods,
> > like really waiting for preempt_disable and local_irq_save/disable sections, those
> > users will be screwed... :-( ...unless we also add relevant rcu_read_lock_sched()
> > for them...
>
> I need to stare at the code and get back up to speed. I expect that the synchronize_sched
> path in my patch is just plain broken.
Again, not a problem -- we have a couple of approaches that might work.
That said, additional ideas are always welcome!
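To make the semantics in question concrete, here is a hedged, kernel-style
sketch of what such users depend on: under classic RCU-sched, any
preempt-disabled (or irq-disabled) region is a read-side critical section even
without rcu_read_lock_sched(), and synchronize_sched() is expected to wait for
it. The struct and function names below are invented for illustration; only
the RCU and preemption APIs are real.

#include <linux/preempt.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
	int data;
};

static struct foo __rcu *global_foo;

/* Reader: relies only on disabling preemption, no explicit RCU markers. */
static int reader(void)
{
	struct foo *p;
	int val = -1;

	preempt_disable();			/* implicit RCU-sched read side */
	p = rcu_dereference_sched(global_foo);
	if (p)
		val = p->data;
	preempt_enable();
	return val;
}

/* Updater: must not free the old element before such readers finish. */
static void updater(struct foo *newp)
{
	struct foo *oldp = rcu_dereference_protected(global_foo, 1);

	rcu_assign_pointer(global_foo, newp);
	synchronize_sched();	/* must also wait for preempt/irq-disabled regions */
	kfree(oldp);
}

A grace-period implementation that only tracks explicit rcu_read_lock() and
rcu_read_unlock() calls would need the extra rcu_read_lock_sched() markers
mentioned above to keep this pattern safe.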
Thanx, Paul