Re: [PATCH 17/17] RCU'd vfsmounts
From: Josh Triplett
Date: Fri Oct 04 2013 - 02:03:17 EST
On Thu, Oct 03, 2013 at 10:29:59PM -0700, Paul E. McKenney wrote:
> On Thu, Oct 03, 2013 at 04:28:27PM -0700, Josh Triplett wrote:
> > On Thu, Oct 03, 2013 at 01:52:45PM -0700, Linus Torvalds wrote:
> > > On Thu, Oct 3, 2013 at 1:41 PM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > The problem is this:
> > > > 	A = 1, B = 1
> > > > CPU1:
> > > > 	A = 0
> > > > 	<full barrier>
> > > > 	synchronize_rcu()
> > > > 	read B
> > > >
> > > > CPU2:
> > > > 	rcu_read_lock()
> > > > 	B = 0
> > > > 	read A
>
> /me scratches his head...
>
> OK, for CPU2 to see 1 from its read from A, the corresponding RCU
> read-side critical section must have started before CPU1 did A=0. This
> means that this same RCU read-side critical section must have started
> before CPU1's synchronize_rcu(), which means that it must complete
> before that synchronize_rcu() returns. Therefore, CPU2's B=0 must
> execute before CPU1's read of B, hence that read of B must return zero.
>
> Conversely, if CPU1's read from B returns 1, we know that CPU2's
> RCU read-side critical section must not have completed until after
> CPU1's synchronize_rcu() returned, which means that the RCU read-side
> critical section must have started after that synchronize_rcu() started,
> so CPU1's assignment to A must also have already happened. Therefore,
> CPU2's read from A must return zero.
Yeah, that makes sense.
I think spending too much time staring at the *implementation* of RCU,
and at the exciting assumptions it has to make about barriers or about
memory operations leaking out of the RCU primitives themselves (for
instance, the fun needed to guarantee a memory barrier on all CPUs, or
to safely use non-atomic operations inside RCU itself), makes it
entirely too difficult to look at a perfectly ordinary *use* of those
primitives and see the obvious. :)
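
For concreteness, here is the whole litmus test written out as
kernel-style C, with Paul's two implications recorded as comments.
This is just a sketch of the pattern under discussion; the function
names and result variables are mine, not anything from Al's or Paul's
mails.

#include <linux/rcupdate.h>

static int A = 1;
static int B = 1;

/* CPU1: the updater/waiter side. */
static void cpu1(void)
{
	int b;

	A = 0;
	smp_mb();		/* the <full barrier> in Al's example */
	synchronize_rcu();	/* waits for all pre-existing readers */
	b = B;

	/*
	 * If b == 1, CPU2's read-side critical section did not
	 * complete before synchronize_rcu() returned, so it must
	 * have started after synchronize_rcu() started, and CPU2's
	 * read of A therefore sees 0.
	 */
}

/* CPU2: the reader side. */
static void cpu2(void)
{
	int a;

	rcu_read_lock();
	B = 0;
	a = A;
	rcu_read_unlock();

	/*
	 * If a == 1, this critical section started before CPU1
	 * stored A = 0, hence before its synchronize_rcu(), so the
	 * B = 0 above must be visible to CPU1's read of B.
	 *
	 * Either way, the outcome (a == 1 && b == 1) is forbidden.
	 */
}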
- Josh Triplett