Re: sem_lock() vs qspinlocks
From: Peter Zijlstra
Date: Fri May 20 2016 - 11:22:09 EST
On Fri, May 20, 2016 at 10:05:33PM +0800, Boqun Feng wrote:
> On Fri, May 20, 2016 at 01:58:19PM +0200, Peter Zijlstra wrote:
> > On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> > > As such, the following restores the behavior of the ticket locks and 'fixes'
> > > (or hides?) the bug in sems. Naturally incorrect approach:
> > >
> > > @@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
> > >
> > >  	for (i = 0; i < sma->sem_nsems; i++) {
> > >  		sem = sma->sem_base + i;
> > > -		spin_unlock_wait(&sem->lock);
> > > +		while (atomic_read(&sem->lock))
> > > +			cpu_relax();
> > >  	}
> > >  	ipc_smp_acquire__after_spin_is_unlocked();
> > >  }
> >
> > The actual bug is clear_pending_set_locked() not having acquire
> > semantics. And the above 'fixes' things because it will observe the old
> > pending bit or the locked bit, so it doesn't matter if the store
> > flipping them is delayed.
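(To make that concrete: a rough sketch of the store in question,
paraphrased from the _Q_PENDING_BITS == 8 variant in
kernel/locking/qspinlock.c, not a verbatim copy:

	/*
	 * We were the pending owner; take the lock by clearing the
	 * pending byte and setting the locked byte in a single plain
	 * store -- no acquire, no release semantics.
	 */
	static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
	{
		struct __qspinlock *l = (void *)lock;

		WRITE_ONCE(l->locked_pending, _Q_LOCKED_VAL);
	}

Nothing orders that store against the loads that follow it in program
order, which is what the scenario below exploits.)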
> >
> > The comment in queued_spin_lock_slowpath() above the smp_cond_acquire()
> > states that that acquire is sufficient, but this is incorrect in the
> > face of spin_is_locked()/spin_unlock_wait() usage only looking at the
> > lock byte.
> >
> > The problem is that the clear_pending_set_locked() is an unordered
> > store, therefore this store can be delayed until no later than
> > spin_unlock() (which orders against it due to the address dependency).
> >
> > This opens numerous races; for example:
> >
> > 	ipc_lock_object(&sma->sem_perm);
> > 	sem_wait_array(sma);
> >
> > 	false -> spin_is_locked(&sma->sem_perm.lock)
> >
> > is entirely possible, because sem_wait_array() consists of pure reads,
> > so the store can pass all that, even on x86.
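(Spelled out, the interleaving worried about is something like:

	CPU 0 (complex op):
		spin_lock(&sma->sem_perm.lock);	/* via the pending path;
						   the byte store that makes
						   the lock look held is
						   unordered */
		sem_wait_array(sma);		/* pure reads; the delayed
						   store may only become
						   visible after these */

	CPU 1 (simple op):
		spin_lock(&sem->lock);
		spin_is_locked(&sma->sem_perm.lock);
						/* false: CPU 0's store is
						   not visible yet */

and both CPUs proceed into what should be a mutually exclusive
section.)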
> >
> > The below 'hack' seems to solve the problem.
> >
> > _However_ this also means the atomic_cmpxchg_relaxed() in the locked:
> > branch is equally wrong -- although not visible on x86. And note that
> > atomic_cmpxchg_acquire() would not in fact be sufficient either, since
> > the acquire is on the LOAD not the STORE of the LL/SC.
> >
> > I need a break of sorts, because after twisting my head around the sem
> > code and then the qspinlock code I'm wrecked. I'll try and make a proper
> > patch if people can indeed confirm my thinking here.
> >
>
> I think your analysis is right, however, the problem only exists if we
> have the following use pattern, right?
>
> CPU 0 CPU 1
> ==================== ==================
> spin_lock(A); spin_lock(B);
> spin_unlock_wait(B); spin_unlock_wait(A);
> do_something(); do_something();
More or less yes. The semaphore code is like:

	spin_lock(A)			spin_lock(B)
	spin_unlock_wait(B)		spin_is_locked(A)

which shows that both spin_is_locked() and spin_unlock_wait() are in the
same class.
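In ipc/sem.c terms that is, very roughly (a simplified sketch, not a
verbatim quote of sem_lock()):

	/* complex, multi-sop path */
	ipc_lock_object(&sma->sem_perm);	/* spin_lock(A) */
	sem_wait_array(sma);			/* spin_unlock_wait(B) on
						   every sem->lock */

	/* simple, single-sop path */
	spin_lock(&sem->lock);			/* spin_lock(B) */
	if (!spin_is_locked(&sma->sem_perm.lock))	/* spin_is_locked(A) */
		/* fast path: touch only this one semaphore */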
> , which ends up with CPU 0 and 1 both running do_something(). And actually
> this can simply be fixed by adding an smp_mb() between spin_lock() and
> spin_unlock_wait() on both CPUs, or by adding an smp_mb() in spin_unlock_wait()
> as PPC does in 51d7d5205d338 "powerpc: Add smp_mb() to arch_spin_is_locked()".
Right, and arm64 does in d86b8da04dfa. Curiously you only fixed
spin_is_locked() and Will only fixed spin_unlock_wait(), while AFAIU we
need to have _BOTH_ fixed.
Now looking at the PPC code, spin_unlock_wait() as per
arch/powerpc/lib/locks.c actually does include the extra smp_mb().
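That is, roughly (paraphrasing arch/powerpc/lib/locks.c from memory;
the hypervisor-yield details are elided):

	void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		smp_mb();		/* order the caller's own lock
					   acquisition against the reads
					   below */

		while (lock->slock)
			cpu_relax();	/* the real code yields to the
					   hypervisor here */

		smp_mb();		/* don't let the caller's critical
					   section hoist above the wait */
	}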
> So if relaxed/acquire atomics and clear_pending_set_locked() work fine
> in other situations, a proper fix would be fixing
> spin_is_locked()/spin_unlock_wait() themselves, or their users?
Right; the relaxed stores work fine for the 'regular' mutually exclusive
critical section usage of locks. And yes, I think only the case you
outlined can care about it.
Let me write a patch..
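(For illustration only -- this is the direction under discussion, not
the actual patch: give the qspinlock variant of spin_unlock_wait() the
same full barrier the PPC and arm64 code grew, e.g. something like:

	static inline void queued_spin_unlock_wait(struct qspinlock *lock)
	{
		/*
		 * Order the caller's prior lock acquisition against the
		 * reads below, cf. 51d7d5205d338 and d86b8da04dfa.
		 */
		smp_mb();

		while (atomic_read(&lock->val) & _Q_LOCKED_MASK)
			cpu_relax();
	}

Whether that alone covers all the cases above is exactly what the
proper patch needs to sort out.)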