Re: sem_lock() vs qspinlocks

From: Peter Zijlstra
Date: Fri May 20 2016 - 16:53:20 EST


On Fri, May 20, 2016 at 04:44:19PM -0400, Waiman Long wrote:
> On 05/20/2016 07:58 AM, Peter Zijlstra wrote:
> >On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> >>As such, the following restores the behavior of the ticket locks and 'fixes'
> >>(or hides?) the bug in sems. Naturally, this is an incorrect approach:
> >>
> >>@@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
> >>
> >> 	for (i = 0; i < sma->sem_nsems; i++) {
> >> 		sem = sma->sem_base + i;
> >>-		spin_unlock_wait(&sem->lock);
> >>+		while (atomic_read(&sem->lock))
> >>+			cpu_relax();
> >> 	}
> >> 	ipc_smp_acquire__after_spin_is_unlocked();
> >>}
> >The actual bug is clear_pending_set_locked() not having acquire
> >semantics. And the above 'fixes' things because it will observe the old
> >pending bit or the locked bit, so it doesn't matter if the store
> >flipping them is delayed.
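
(For context: spin_unlock_wait() on a qspinlock only watches the locked
byte, roughly like the sketch below, paraphrased from memory of the
4.6-era asm-generic/qspinlock.h rather than quoted verbatim:

  static inline void queued_spin_unlock_wait(struct qspinlock *lock)
  {
          /* return once nobody holds the lock; pending/tail bits are ignored */
          while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
                  cpu_relax();
  }

whereas the hack above spins until the whole word reads 0, so the old
pending bit keeps it spinning even while the pending-to-locked store is
still in flight.)
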
>
> The clear_pending_set_locked() is not the only place where the lock is set.
> If there is more than one waiter, the queuing path is used instead, and
> set_locked(), which is also an unordered store, then sets the lock.

Ah yes. I didn't get that far. One case was enough :-)
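
For reference, the two stores in question look roughly like this in the
4.6-era kernel/locking/qspinlock.c (the _Q_PENDING_BITS == 8 variants,
reproduced from memory, so take it as a sketch rather than the exact
source):

  /* pending-bit path: flip pending to locked with a plain store */
  static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  {
          struct __qspinlock *l = (void *)lock;

          WRITE_ONCE(l->locked_pending, _Q_LOCKED_VAL);
  }

  /* queueing path: the MCS queue head takes the lock the same way */
  static __always_inline void set_locked(struct qspinlock *lock)
  {
          struct __qspinlock *l = (void *)lock;

          WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
  }

Neither store provides acquire ordering, which is the problem in both
cases.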