Re: [RFC PATCH v2] memory-barriers: remove smp_mb__after_unlock_lock()

From: Will Deacon
Date: Tue Aug 18 2015 - 04:37:17 EST


On Tue, Aug 18, 2015 at 02:50:55AM +0100, Michael Ellerman wrote:
> On Mon, 2015-08-17 at 09:57 +0100, Will Deacon wrote:
> > On Mon, Aug 17, 2015 at 07:15:01AM +0100, Paul E. McKenney wrote:
> > > On Mon, Aug 17, 2015 at 02:06:07PM +1000, Michael Ellerman wrote:
> > > > On Wed, 2015-08-12 at 08:43 -0700, Paul E. McKenney wrote:
> > > > I thought the end result of this thread was that we didn't *need* to change the
> > > > powerpc lock semantics? Or did I read it wrong?
> > > >
> > > > i.e. the docs now say that RELEASE+ACQUIRE is not a full barrier, which is
> > > > consistent with our current implementation.
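
(For anyone following along, here is a minimal sketch of the ordering in
question, using hypothetical variables and functions rather than anything
from the actual patches. With lwsync-based unlock/lock on powerpc, CPU 0's
store to X and load from Y can be reordered across the unlock+lock
boundary, so r0 == 0 && r1 == 0 is permitted; adding
smp_mb__after_unlock_lock(), which is smp_mb() on powerpc, forbids that
outcome.)

/* Illustrative kernel-style sketch; all names here are hypothetical. */
static DEFINE_SPINLOCK(mylock);
static int X, Y, r0, r1;

void cpu0(void)				/* runs on CPU 0 */
{
	spin_lock(&mylock);
	WRITE_ONCE(X, 1);
	spin_unlock(&mylock);

	spin_lock(&mylock);
	smp_mb__after_unlock_lock();	/* smp_mb() on powerpc, no-op elsewhere */
	r0 = READ_ONCE(Y);		/* without the barrier, this load can be
					 * satisfied before the store to X is
					 * visible to CPU 1 */
	spin_unlock(&mylock);
}

void cpu1(void)				/* runs on CPU 1 */
{
	WRITE_ONCE(Y, 1);
	smp_mb();
	r1 = READ_ONCE(X);
}
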
> > >
> > > That change happened about 1.5 years ago, and I thought that the
> > > current discussion was about reversing it, based in part on the
> > > recent powerpc benchmarks of locking primitives with and without the
> > > sync instruction. But regardless, I clearly cannot remove either
> > > smp_mb__after_unlock_lock() itself or its powerpc definition as smp_mb()
> > > if powerpc unlock/lock is not strengthened.
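
(For reference, roughly how the definitions looked in mainline at the
time; the exact file locations are from memory:)

/* Generic fallback, a no-op (include/linux/spinlock.h, roughly): */
#ifndef smp_mb__after_unlock_lock
#define smp_mb__after_unlock_lock()	do { } while (0)
#endif

/* powerpc override (arch/powerpc/include/asm/spinlock.h, roughly): */
#define smp_mb__after_unlock_lock()	smp_mb()	/* Full ordering for lock. */
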
> >
> > Yup. Peter and I would really like to get rid of smp_mb__after_unlock_lock
> > entirely, which would mean strengthening the ppc spinlocks. Moving the
> > barrier primitive into RCU is a good step to prevent more widespread usage
> > of the barrier, but we'd really like to go further if the performance impact
> > is deemed acceptable (which is what this thread is about).
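
(The RCU-internal usage mentioned above is, roughly, the rcu_node lock
acquisition pattern; the function below is a made-up wrapper just to show
the shape, not verbatim kernel code:)

/* Hypothetical example showing the RCU-internal pattern. */
static void example_rcu_node_update(struct rcu_node *rnp)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&rnp->lock, flags);
	smp_mb__after_unlock_lock();	/* Combine with the prior unlock of
					 * rnp->lock to order against earlier
					 * critical sections. */
	/* ... grace-period state updates protected by rnp->lock ... */
	raw_spin_unlock_irqrestore(&rnp->lock, flags);
}
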
>
> OK, sorry for completely missing the point; too many balls in the air here.

No problem!

> I'll do some benchmarks and see what we come up with.

Thanks, that sounds great. FWIW, there are multiple ways of implementing
the patch (i.e. whether you strengthen lock or unlock). I had a crack at
something here, but it's not tested:

http://marc.info/?l=linux-arch&m=143758379023849&w=2
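
Roughly speaking, the two options come down to where the full sync ends
up; the helpers below are hypothetical names and an untested outline only,
not the patch above (the real code touches the lwarx/stwcx. loop and the
PPC_ACQUIRE_BARRIER/PPC_RELEASE_BARRIER macros):

/* Option 1: strengthen lock -- full sync once the lock is held. */
static inline void ppc_spin_lock_full(arch_spinlock_t *lock)
{
	arch_spin_lock(lock);				/* existing acquire */
	__asm__ __volatile__("sync" : : : "memory");	/* upgrade to full barrier */
}

/* Option 2: strengthen unlock -- full sync before the releasing store. */
static inline void ppc_spin_unlock_full(arch_spinlock_t *lock)
{
	__asm__ __volatile__("sync" : : : "memory");	/* upgrade to full barrier */
	arch_spin_unlock(lock);				/* existing lwsync + store */
}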

Will