Re: [patch 24/52] fs: dcache reduce d_parent locking
From: Nick Piggin
Date: Thu Jun 24 2010 - 12:05:33 EST
On Thu, Jun 24, 2010 at 08:32:18AM -0700, Paul E. McKenney wrote:
> On Fri, Jun 25, 2010 at 01:07:06AM +1000, Nick Piggin wrote:
> > On Thu, Jun 24, 2010 at 10:44:22AM +0200, Peter Zijlstra wrote:
> > > On Thu, 2010-06-24 at 13:02 +1000, npiggin@xxxxxxx wrote:
> > > > Use RCU property of dcache to simplify locking in some places where we
> > > > take d_parent and d_lock.
> > > >
> > > > Comment: don't need rcu_deref because we take the spinlock and recheck it.
> > >
> > > But does the LOCK barrier imply a DATA DEPENDENCY barrier? (It does on
> > > x86, and the compiler barrier implied by spin_lock() suffices to replace
> > > ACCESS_ONCE()).
> > Well the dependency we care about is from loading the parent pointer
> > to acquiring its spinlock. But we can't possibly have stale data given
> > to the spin lock operation itself because it is a RMW.
> As long as you check for the structure being valid after acquiring the
> lock, I agree. Otherwise, I would be concerned about the following
> sequence of events:
> 1. CPU 0 picks up a pointer to a given data element.
> 2. CPU 1 removes this element from the list, drops any locks that
> 	it might have, and starts waiting for a grace period to elapse.
> 3. CPU 0 acquires the lock, does some operation that would
> be appropriate had the element not been removed, then
> releases the lock.
> 4. After the grace period, CPU 1 frees the element, negating
> CPU 0's hard work.
> The usual approach is to have a "deleted" flag or some such in the
> element that CPU 0 would set when removing the element and that CPU 1
> would check after acquiring the lock. Which you might well already
> be doing! ;-)
Thanks, yep it's done under RCU, and after taking the lock it rechecks
to see that it is still reachable by the same pointer (and if not,
unlocks and retries) so it should be fine.