Re: question about RCU dynticks_nesting
From: Paul E. McKenney
Date: Mon May 04 2015 - 16:02:42 EST
On Mon, May 04, 2015 at 03:39:25PM -0400, Rik van Riel wrote:
> On 05/04/2015 02:39 PM, Paul E. McKenney wrote:
> > On Mon, May 04, 2015 at 11:59:05AM -0400, Rik van Riel wrote:
>
> >> In fact, would we be able to simply use tsk->rcu_read_lock_nesting
> >> as an indicator of whether or not we should bother waiting on that
> >> task or CPU when doing synchronize_rcu?
> >
> > Depends on exactly what you are asking. If you are asking if I could add
> > a few more checks to preemptible RCU and speed up grace-period detection
> > in a number of cases, the answer is very likely "yes". This is on my
> > list, but not particularly high priority. If you are asking whether
> > CPU 0 could access ->rcu_read_lock_nesting of some task running on
> > some other CPU, in theory, the answer is "yes", but in practice that
> > would require putting full memory barriers in both rcu_read_lock()
> > and rcu_read_unlock(), so the real answer is "no".
> >
> > Or am I missing your point?
>
> The main question is "how can we greatly reduce the overhead
> of nohz_full by simplifying the RCU extended-quiescent-state
> code called in the syscall fast path, and maybe piggyback on
> that to do time accounting for remote CPUs?"
>
> Your memory barrier answer above makes it clear we will still
> want to do the RCU stuff at syscall entry & exit time, at least
> on x86, where we already have automatic and implicit memory
> barriers.
We do need to keep in mind that x86's automatic and implicit memory
barriers do not order prior stores against later loads.
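To make that concrete: the remote-scan idea boils down to the classic
store-buffering pattern below.  This is only a sketch -- reader(),
updater(), gp, do_something_with(), and struct foo are made-up
illustrations, not the actual tree-RCU code:

	struct foo;
	extern void do_something_with(struct foo *p);

	int rcu_read_lock_nesting;	/* stands in for t->rcu_read_lock_nesting */
	struct foo *gp;			/* RCU-protected pointer */

	void reader(void)		/* CPU 0, barrier-free fast path */
	{
		struct foo *p;

		WRITE_ONCE(rcu_read_lock_nesting, 1);	/* A: mark reader active */
		/*
		 * x86 can satisfy load B before store A is visible to
		 * other CPUs.  Only a full smp_mb() here would forbid
		 * that, which is exactly the cost referred to above.
		 */
		p = READ_ONCE(gp);			/* B: fetch protected data */
		do_something_with(p);
		WRITE_ONCE(rcu_read_lock_nesting, 0);
	}

	void updater(void)		/* CPU 1, grace-period machinery */
	{
		struct foo *old = READ_ONCE(gp);

		WRITE_ONCE(gp, NULL);			/* C: unpublish */
		smp_mb();				/* order C before D */
		if (!READ_ONCE(rcu_read_lock_nesting))	/* D: remote scan */
			kfree(old);	/* reader may still be using old! */
	}

Because A-then-B on the reader side is a store followed by a load, x86
can reorder them even though updater() carries a full barrier: B can
fetch the old pointer while D still sees a zero nesting count, and old
gets freed out from under the reader.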
Hmmm...  But didn't the earlier performance measurements show that the
bulk of the overhead was in the delta-time computations rather than in
the RCU accounting?
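For reference, the delta-time work in question looks roughly like the
sketch below.  It is illustrative only: vtime_delta_sketch() and
account_user_time_sketch() are placeholder names, and the real code in
kernel/sched/cputime.c differs in detail (clock source, cputime
conversion):

	void vtime_delta_sketch(struct task_struct *tsk)  /* at each transition */
	{
		u64 now, delta;

		write_seqlock(&tsk->vtime_seqlock);	/* so remote CPUs can
							 * read a consistent
							 * snapshot */
		now = sched_clock();			/* per-transition clock read */
		delta = now - tsk->vtime_snap;		/* the delta computation */
		tsk->vtime_snap = now;
		account_user_time_sketch(tsk, delta);	/* fold into cputime */
		write_sequnlock(&tsk->vtime_seqlock);
	}

That is, a clock read plus a seqlock write side on every user/kernel
transition, which is where such overhead would come from.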
Thanx, Paul