Re: [PATCH v5 tip/core/locking 6/7] locking: Add an smp_mb__after_unlock_lock() for UNLOCK+LOCK barrier

From: Paul E. McKenney
Date: Tue Dec 10 2013 - 15:12:08 EST


On Tue, Dec 10, 2013 at 09:45:08AM -0800, Josh Triplett wrote:
> On Tue, Dec 10, 2013 at 01:37:26PM +0100, Peter Zijlstra wrote:
> > On Mon, Dec 09, 2013 at 05:28:02PM -0800, Paul E. McKenney wrote:
> > > diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
> > > index f89da808ce31..abf645799991 100644
> > > --- a/arch/powerpc/include/asm/barrier.h
> > > +++ b/arch/powerpc/include/asm/barrier.h
> > > @@ -84,4 +84,6 @@ do { \
> > > ___p1; \
> > > })
> > >
> > > +#define smp_mb__after_unlock_lock() do { } while (0)
> > > +
> > > #endif /* _ASM_POWERPC_BARRIER_H */
> >
> > Didn't Ben say ppc actually violates the current unlock+lock assumption,
> > and therefore this barrier wouldn't actually be a nop on ppc?
>
> Or, ppc could fix its lock primitives to preserve the unlock+lock
> assumption, and avoid subtle breakage across half the kernel.

Indeed. However, another motivation for this change was the difficulty
in proving that x86 really provided the equivalent of a full barrier
for the MCS lock handoff case:

http://www.spinics.net/lists/linux-mm/msg65653.html

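For readers following along, here is a minimal sketch (not part of the
posted patch, and using hypothetical lock names) of the usage pattern the
new primitive is meant to support: an UNLOCK of one lock followed by a
LOCK of another is not guaranteed to act as a full memory barrier on every
architecture, so code that depends on full ordering across the handoff
places the primitive immediately after acquiring the second lock.

	/*
	 * Hypothetical example, not taken from the patch series:
	 * ensure the UNLOCK+LOCK sequence below orders all prior
	 * accesses against all subsequent accesses.
	 */
	spin_unlock(&old_node->lock);
	spin_lock(&new_node->lock);
	smp_mb__after_unlock_lock();	/* UNLOCK+LOCK now acts as smp_mb(). */

Under this scheme, an architecture whose UNLOCK+LOCK sequence is already a
full barrier can define the macro as a no-op (as in the powerpc hunk quoted
above), while an architecture where it is not would need the macro to expand
to a real barrier such as smp_mb().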
Thanx, Paul
