Re: sem_lock() vs qspinlocks

From: Manfred Spraul
Date: Sun May 22 2016 - 04:43:19 EST


Hi Peter,


On 05/20/2016 06:04 PM, Peter Zijlstra wrote:
> On Fri, May 20, 2016 at 05:21:49PM +0200, Peter Zijlstra wrote:
>
>> Let me write a patch..
> OK, something like the below then.. lemme go build that and verify that
> too fixes things.
>
> ---
> Subject: locking,qspinlock: Fix spin_is_locked() and spin_unlock_wait()
>
> Similar to commits:
>
>   51d7d5205d33 ("powerpc: Add smp_mb() to arch_spin_is_locked()")
>   d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
>
> qspinlock suffers from the fact that the _Q_LOCKED_VAL store is
> unordered inside the ACQUIRE of the lock.
>
> And while this is not a problem for the regular mutual exclusive
> critical section usage of spinlocks, it breaks creative locking like:
>
>   CPU0                          CPU1
>
>   spin_lock(A)                  spin_lock(B)
>   spin_unlock_wait(B)           if (!spin_is_locked(A))
>   do_something()                  do_something()
>
> In that both CPUs can end up running do_something() at the same time,
> because our _Q_LOCKED_VAL store can drop past the spin_unlock_wait()
> spin_is_locked() loads (even on x86!!).
How would we handle mixed spin_lock()/mutex_lock() code?
For the IPC code, I would like to replace the outer lock with a mutex.
The code only uses spinlocks because, at the time it was written, the mutex code did not yet busy-wait (spin) before sleeping.
With a mutex, the code would become simpler (all the lock/unlock/kmalloc/relock parts could be removed).

The result would be something like:

  CPU0                          CPU1

  mutex_lock(A)                 spin_lock(B)
  spin_unlock_wait(B)           if (!mutex_is_locked(A))
  do_something()                  do_something()

--
Manfred