[PATCH 1/4] spinlock: Document memory barrier rules

From: Manfred Spraul
Date: Sun Aug 28 2016 - 07:56:46 EST


Right now, the spinlock machinery tries to guarantee memory ordering even
for unorthodox locking cases, which results in a constant stream of
architecture updates as new unorthodox ideas need to be supported.

The patch proposes to reverse that:
- spin_lock() is ACQUIRE, spin_unlock() is RELEASE.
- spin_unlock_wait() is also ACQUIRE.
- Code that needs stronger guarantees must use appropriate explicit
  barriers (sketch below).
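
For illustration, a minimal sketch of the intended pairing, modeled on
the ipc/sem.c code below. All names (struct obj, fast_lock, slow_lock,
slow_mode) are invented for this example:

	#include <linux/spinlock.h>
	#include <linux/types.h>

	struct obj {
		spinlock_t slow_lock;	/* global lock, slow path */
		spinlock_t fast_lock;	/* per-item lock, fast path */
		bool slow_mode;		/* true while the slow path runs */
	};

	static void fast_path(struct obj *o)
	{
		spin_lock(&o->fast_lock);	/* ACQUIRE only */

		/*
		 * The store to fast_lock must be visible before slow_mode
		 * is read, otherwise the slow path's spin_unlock_wait()
		 * could miss this critical section. ACQUIRE alone does
		 * not promise that, so ask for it explicitly:
		 */
		smp_mb__after_spin_lock();

		if (!READ_ONCE(o->slow_mode)) {
			/* fast path, protected only by fast_lock */
		}
		spin_unlock(&o->fast_lock);
	}

	static void slow_path(struct obj *o)
	{
		spin_lock(&o->slow_lock);

		WRITE_ONCE(o->slow_mode, true);
		smp_mb();	/* order the store before the wait below */

		/*
		 * spin_unlock_wait() has ACQUIRE semantics, so no extra
		 * smp_rmb() is needed after it.
		 */
		spin_unlock_wait(&o->fast_lock);

		/* slow path work; all fast path holders have drained */

		WRITE_ONCE(o->slow_mode, false);
		spin_unlock(&o->slow_lock);
	}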

Architectures where some of these barriers come for free can define
them as NOPs.
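
For example, an architecture whose spin_lock() is built from a fully
ordered atomic RMW (x86, for instance) could override the generic
definition; a hypothetical sketch, with the architecture name invented:

	/* arch/xyz/include/asm/spinlock.h */

	/*
	 * spin_lock() is implemented with a fully ordered atomic RMW on
	 * this architecture, so the ACQUIRE already acts as a full
	 * barrier and the explicit barrier is free:
	 */
	#define smp_mb__after_spin_lock()	do { } while (0)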

As a first step, the patch converts ipc/sem.c to the new definitions:
- no more smp_rmb() after spin_unlock_wait(); the ACQUIRE is now part
  of spin_unlock_wait() itself.
- smp_mb__after_spin_lock() instead of a direct smp_mb().

Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
---
 Documentation/locking/spinlocks.txt |  5 +++++
 include/linux/spinlock.h            | 12 ++++++++++++
 ipc/sem.c                           | 16 +---------------
 3 files changed, 18 insertions(+), 15 deletions(-)

diff --git a/Documentation/locking/spinlocks.txt b/Documentation/locking/spinlocks.txt
index ff35e40..fc37beb 100644
--- a/Documentation/locking/spinlocks.txt
+++ b/Documentation/locking/spinlocks.txt
@@ -40,6 +40,11 @@ example, internal driver data structures that nobody else ever touches).
touches a shared variable has to agree about the spinlock they want
to use.

+ NOTE! Code that needs memory ordering stricter than ACQUIRE during
+ LOCK and RELEASE during UNLOCK must use appropriate explicit memory
+ barriers, such as smp_mb__after_spin_lock().
+ spin_unlock_wait() has ACQUIRE semantics.
+
----

Lesson 2: reader-writer spinlocks.
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 47dd0ce..d79000e 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -130,6 +130,18 @@ do { \
#define smp_mb__before_spinlock() smp_wmb()
#endif

+#ifndef smp_mb__after_spin_lock
+/**
+ * smp_mb__after_spin_lock() - Provide smp_mb() after spin_lock
+ *
+ * spin_lock() provides ACQUIRE semantics with regard to reading the lock.
+ * There is no guarantee that the write to the lock is visible before any
+ * read or write operation within the protected region is performed.
+ * If the lock write must happen first, this barrier is required.
+ */
+#define smp_mb__after_spin_lock() smp_mb()
+#endif
+
/**
* raw_spin_unlock_wait - wait until the spinlock gets unlocked
* @lock: the spinlock in question.
diff --git a/ipc/sem.c b/ipc/sem.c
index 5e318c5..ac15ab2 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -290,14 +290,6 @@ static void complexmode_enter(struct sem_array *sma)
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
- /*
- * spin_unlock_wait() is not a memory barriers, it is only a
- * control barrier. The code must pair with spin_unlock(&sem->lock),
- * thus just the control barrier is insufficient.
- *
- * smp_rmb() is sufficient, as writes cannot pass the control barrier.
- */
- smp_rmb();
}

/*
@@ -363,13 +355,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
*/
spin_lock(&sem->lock);

- /*
- * See 51d7d5205d33
- * ("powerpc: Add smp_mb() to arch_spin_is_locked()"):
- * A full barrier is required: the write of sem->lock
- * must be visible before the read is executed
- */
- smp_mb();
+ smp_mb__after_spin_lock();

if (!smp_load_acquire(&sma->complex_mode)) {
/* fast path successful! */
--
2.5.5