On Thu, 10 Oct 2019, Peter Zijlstra wrote:

> I don't know. The new documentation would not have answered my question
> (is it ok to combine smp_mb__before_atomic() with atomic_relaxed()?).
> And it copies content already present in atomic_t.txt.
>
> On Thu, Oct 10, 2019 at 02:13:47PM +0200, Manfred Spraul wrote:
> > Therefore smp_mb__{before,after}_atomic() may be combined with
> > cmpxchg_relaxed, to form a full memory barrier, on all archs.
>
> Just so.
We might want something like this?
----8<---------------------------------------------------------
From: Davidlohr Bueso <dave@xxxxxxxxxxxx>
Subject: [PATCH] Documentation/memory-barriers.txt: Mention smp_mb__{before,after}_atomic() and CAS
Explicitly mention that smp_mb__{before,after}_atomic() can be used to
guarantee serialization even upon failed cmpxchg() (or similar) calls.
Signed-off-by: Davidlohr Bueso <dbueso@xxxxxxx>
---
Documentation/memory-barriers.txt | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 1adbb8a371c7..5d2873d4b442 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -1890,6 +1890,18 @@ There are some more advanced barrier functions:
      This makes sure that the death mark on the object is perceived to be set
      *before* the reference counter is decremented.
 
+     Similarly, these barriers can be used to guarantee serialization for atomic
+     RMW calls on architectures which may not imply memory barriers upon failure.
+
+	obj->next = NULL;
+	smp_mb__before_atomic();
+	if (cmpxchg(&obj->ptr, NULL, val))
+		return;
+
+     This makes sure that the store to the next pointer always has smp_store_mb()
+     semantics. As such, smp_mb__{before,after}_atomic() calls allow optimizing
+     the barrier usage by finer grained serialization.
+
      See Documentation/atomic_{t,bitops}.txt for more information.