Re: [PATCH 3/4] x86,asm: Re-work smp_store_mb()

From: Davidlohr Bueso
Date: Tue Oct 27 2015 - 18:02:11 EST

On Wed, 28 Oct 2015, Linus Torvalds wrote:

> On Wed, Oct 28, 2015 at 4:53 AM, Davidlohr Bueso <dave@xxxxxxxxxxxx> wrote:
>
>> Note that this might affect callers that could/would rely on the
>> atomicity semantics, but there are no guarantees of that for
>> smp_store_mb() mentioned anywhere, plus most archs use this anyway.
>> Thus we continue to be consistent with the memory-barriers.txt file,
>> and more importantly, maintain the semantics of the smp_ nature.
>
> So I dislike this patch, mostly because it now makes it obvious that
> smp_store_mb() seems to be totally pointless. Every single
> implementation is now apparently WRITE_ONCE+smp_mb(), and there are
> what, five users of it, so why not then open-code it?

So after having gone through pretty much all of the smp_store_mb() code,
this is a feeling I also share. However, I justified its existence (as
opposed to dropping the call, updating all the callers, documenting the
barriers, etc.) as at least encapsulating the store+mb logic, which is
apparently a somewhat common pattern. Also, the function does exactly
what its name implies.

But I have no strong preference either way. Now, if we do keep
smp_store_mb(), it should probably be made generic, instead of having
each arch define it.

> But more importantly, is the "WRITE_ONCE()" even necessary? If there
> are no atomicity guarantees, then why bother with WRITE_ONCE() either?

Agreed. Hmm, this was introduced by commit ab3f02fc237 ("locking/arch:
Add WRITE_ONCE() to set_mb()"), back when the atomicity aspects were
not yet clear.

> So with this patch, the whole thing becomes pointless, I feel. (Ok, so
> it may have been pointless before too, but at least before this patch
> it generated special code, now it doesn't). So why carry it along at
> all?
Ok, unless others are strongly against it, I'll send a series to drop the
call altogether.
