Re: sem_lock() vs qspinlocks

From: Linus Torvalds
Date: Fri May 20 2016 - 17:44:38 EST


On Fri, May 20, 2016 at 2:06 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>
>> See for example "ipc_smp_acquire__after_spin_is_unlocked()", which has
>> a big comment atop of it that now becomes nonsensical with this patch.
>
> Not quite; we still need that I think.

I think so too, but it's the *comment* that is nonsensical.

The comment says that "spin_unlock_wait() and !spin_is_locked() are
not memory barriers", and with your patch those operations now
clearly *are* memory barriers.

However, the semaphore code wants a memory barrier after the _read_
in the spin_unlock_wait(), which it doesn't get.
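
For concreteness, the pattern in question (lightly simplified from
sem_wait_array() in ipc/sem.c, with the complex_count short-circuit
omitted) is:

    /* Caller holds sem_perm.lock; wait for all simple ops to finish. */
    for (i = 0; i < sma->sem_nsems; i++) {
        sem = sma->sem_base + i;
        spin_unlock_wait(&sem->lock);
    }
    /*
     * spin_unlock_wait() only _reads_ the per-semaphore lock word;
     * nothing orders that read against our later accesses to the
     * protected state, so the caller has to supply the acquire-side
     * barrier itself (an smp_rmb() under the hood):
     */
    ipc_smp_acquire__after_spin_is_unlocked();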

So that is part of why I don't like the "hide the memory barriers
inside the implementation" approach.

Because once the operation isn't atomic (exactly as the spinlock is
now no longer atomic on x86: for the queued case it's a separate
read-with-acquire followed by an unordered store), the barrier
semantics within such an operation get very screwy. There may be
barriers, but they aren't barriers to *everything*; they are only
barriers to part of the non-atomic operation.
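
To make that concrete, the queued acquisition ends up with roughly
this shape (a condensed sketch, not the literal code in
kernel/locking/qspinlock.c, which has the MCS node handling around
it):

    /* spin until the locked byte and the pending bit are both clear */
    smp_cond_acquire(!(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK));

    /* ...then take ownership with a plain byte store (set_locked()) */
    WRITE_ONCE(((struct __qspinlock *)lock)->locked, _Q_LOCKED_VAL);

The acquire in the first half only orders that read; the second half
is a plain store, so from another CPU's point of view "taking the
lock" is two separate accesses, not one atomic event.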

If we were to make the synchronization explicit, we'd still have to
deal with all the subtle semantics, but now the subtle semantics would
at least be *explicit*. And it would make it much easier to explain
the barriers in that ipc semaphore code.
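
As a strawman (the exact spelling is made up, just to show where the
ordering would live), the ipc code could spell it out itself:

    /* wait for the current owner, with only a control dependency */
    while (spin_is_locked(&sem->lock))
        cpu_relax();

    /* explicit: order the lock-word reads before the data reads */
    smp_rmb();

That's more code at the call site, but the barrier, and what it is a
barrier *for*, is right there where it matters.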

>> Now, I'd take Peter's patch as-is, because I don't think any of this
>> matters from a *performance* standpoint, and Peter's patch is much
>> smaller and simpler.
>
> I would suggest you do this and also mark it for stable v4.2 and later.

Oh, I definitely agree on the stable part, and yes, the "split
things up" model should come later if people agree that it's a good
thing.

Should I take the patch as-is, or should I just wait for a pull
request from the locking tree? Either is ok by me.

Linus