[PATCH 2/2] ipc/sem.c: Remove duplicated memory barriers.

From: Manfred Spraul
Date: Wed Jul 13 2016 - 01:08:11 EST


Since commit 2c610022711 ("locking/qspinlock: Fix spin_unlock_wait()
some more"), spin_unlock_wait() provides the necessary memory barriers
itself, so the extra smp_rmb() at the end of complexmode_enter() is no
longer required.
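
Just to make the shape of that argument explicit, here is a rough
userspace C11 sketch. All names are invented stand-ins, and the fence
is only a placeholder for whatever ordering the qspinlock code actually
provides; this is not the kernel implementation:

#include <stdatomic.h>
#include <stdio.h>

/* Userspace stand-in for the per-semaphore lock word (invented name). */
static atomic_int sem_lock_word;

/*
 * Simplified model of spin_unlock_wait() after 2c610022711: the wait
 * loop is followed by ordering inside the primitive itself, so a
 * caller such as complexmode_enter() no longer needs a trailing
 * smp_rmb() of its own.  (Barrier placement and strength in the kernel
 * are arch-specific; this only shows the shape of the argument.)
 */
static void wait_until_unlocked(void)
{
        while (atomic_load_explicit(&sem_lock_word, memory_order_relaxed))
                ;                       /* spin until the lock is free */
        atomic_thread_fence(memory_order_seq_cst); /* ordering built in */
}

int main(void)
{
        wait_until_unlocked();  /* lock word is 0, returns immediately */
        puts("lock observed free");
        return 0;
}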

And as explained in commit 055ce0fd1b8 ("locking/qspinlock: Add
comments"), spin_lock() provides a barrier so that reads within the
critical section cannot happen before the write of the lock variable is
visible. In other words, spin_lock() provides an acquire barrier after
the write of the lock variable; this barrier pairs with the smp_mb() in
complexmode_enter(), so the smp_mb() in the sem_lock() fast path can be
removed as well.
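
Again only as a rough userspace C11 sketch (invented names, the
spinlock reduced to a bare CAS with acquire ordering), the fast path
pairing I am relying on looks like this:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Userspace stand-ins, not the real ipc/sem.c data structures. */
static atomic_bool complex_mode;        /* models sma->complex_mode */
static atomic_int sem_lock_word;        /* models sem->lock         */

/*
 * Model of the sem_lock() fast path: the locked RMW that takes the
 * per-semaphore lock has acquire semantics, so the read of
 * complex_mode cannot move before it.  Per the argument above, that
 * ordering pairs with the smp_mb() in complexmode_enter(), which is
 * why no separate smp_mb() is needed between the lock and the load.
 */
static bool fast_path_possible(void)
{
        int unlocked = 0;

        /* ~ spin_lock(&sem->lock): write the lock word, acquire ordering */
        while (!atomic_compare_exchange_weak_explicit(&sem_lock_word,
                                                      &unlocked, 1,
                                                      memory_order_acquire,
                                                      memory_order_relaxed))
                unlocked = 0;

        /* ~ smp_load_acquire(&sma->complex_mode) */
        return !atomic_load_explicit(&complex_mode, memory_order_acquire);
}

int main(void)
{
        printf("fast path possible: %d\n", fast_path_possible());
        return 0;
}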

Please review!
For x86, the patch is safe. But I don't know the ordering guarantees of
all the other architectures that support SMP well enough to be certain.

Signed-off-by: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
---
ipc/sem.c | 14 --------------
1 file changed, 14 deletions(-)

diff --git a/ipc/sem.c b/ipc/sem.c
index 0da63c8..d7b4212 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -291,14 +291,6 @@ static void complexmode_enter(struct sem_array *sma)
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
- /*
- * spin_unlock_wait() is not a memory barriers, it is only a
- * control barrier. The code must pair with spin_unlock(&sem->lock),
- * thus just the control barrier is insufficient.
- *
- * smp_rmb() is sufficient, as writes cannot pass the control barrier.
- */
- smp_rmb();
}

/*
@@ -363,12 +355,6 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
*/
spin_lock(&sem->lock);

- /*
- * A full barrier is required: the write of sem->lock
- * must be visible before the read is executed
- */
- smp_mb();
-
if (!smp_load_acquire(&sma->complex_mode)) {
/* fast path successful! */
return sops->sem_num;
--
2.5.5