[PATCH] ipc/sem.c: Update/correct memory barriers.

From: Manfred Spraul
Date: Sat Feb 28 2015 - 15:36:47 EST


sem_lock() did not properly pair memory barriers:

spin_is_locked() and spin_unlock_wait() are both only control
barriers. The code needs an acquire barrier, otherwise the CPU might
perform read operations before the lock test completes.

One code path had the correct memory barrier; in the other path the
memory barrier was missing.
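
To illustrate (a simplified sketch, not the exact ipc/sem.c code;
shared_data stands for any data protected by the lock):

	/* broken: */
	spin_unlock_wait(&sem->lock);	/* only a control barrier */
	x = shared_data;		/* the CPU may speculate this read
					 * before the lock test finishes */

	/* fixed: */
	spin_unlock_wait(&sem->lock);
	smp_rmb();			/* reads below cannot be reordered
					 * before the read of sem->lock */
	x = shared_data;

Writes are unaffected: stores are not speculated and thus cannot pass
the control barrier.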

The patch:
- defines a new barrier, smp_mb__after_control_barrier(), which
  defaults to smp_rmb() (see the sketch below).
- converts ipc/sem.c to the new barrier.
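
Roughly, the intended pairing looks like this (a sketch of the
sem_lock() path from the diff below; details omitted):

	/* thread A: starts a complex operation */
	spin_lock(&sma->sem_perm.lock);
	sma->complex_count++;
	spin_unlock(&sma->sem_perm.lock);

	/* thread B: optimistic single-semaphore fast path */
	if (!spin_is_locked(&sma->sem_perm.lock)) {
		smp_mb__after_control_barrier();
		/* The re-read of complex_count below cannot be
		 * performed before the spin_is_locked() test above.
		 */
		if (sma->complex_count == 0) {
			/* fast path: the per-semaphore lock suffices */
		}
	}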

The fix is necessary for all kernels that use sem_wait_array()
(i.e. all kernels starting with 3.10).

Open tasks:
- checkpatch.pl gives a warning; I think it is spurious.
- Who can take care of adding this patch to a tree that is headed
  for Linus' tree?

Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Reported-by: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
---
 include/linux/spinlock.h | 10 ++++++++++
 ipc/sem.c                |  7 ++++++-
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3e18379..c383a9c 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -140,6 +140,16 @@ do { \
 #define smp_mb__after_unlock_lock()	do { } while (0)
 #endif

+/*
+ * Place this after a control barrier (e.g. a spin_unlock_wait())
+ * to ensure that reads cannot be moved ahead of the control barrier.
+ * Writes do not need a barrier: they are not speculated and thus
+ * cannot pass the control barrier.
+ */
+#ifndef smp_mb__after_control_barrier
+#define smp_mb__after_control_barrier() smp_rmb()
+#endif
+
 /**
  * raw_spin_unlock_wait - wait until the spinlock gets unlocked
  * @lock: the spinlock in question.
diff --git a/ipc/sem.c b/ipc/sem.c
index 9284211..ea9a989 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -267,6 +267,10 @@ static void sem_wait_array(struct sem_array *sma)
 	if (sma->complex_count) {
 		/* The thread that increased sma->complex_count waited on
 		 * all sem->lock locks. Thus we don't need to wait again.
+		 *
+		 * There is also no need for memory barriers: with
+		 * complex_count>0, all threads acquire/release
+		 * sem_perm.lock.
 		 */
 		return;
 	}
@@ -275,6 +279,7 @@ static void sem_wait_array(struct sem_array *sma)
 	for (i = 0; i < sma->sem_nsems; i++) {
 		sem = sma->sem_base + i;
 		spin_unlock_wait(&sem->lock);
 	}
+	smp_mb__after_control_barrier();
 }

/*
@@ -333,7 +338,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
 			 *	complex_count++;
 			 *	spin_unlock(sem_perm.lock);
 			 */
-			smp_rmb();
+			smp_mb__after_control_barrier();
 
 			/*
 			 * Now repeat the test of complex_count:
--
2.1.0
