Re: [PATCH v4 3/4] locking/qspinlock: Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32

From: Waiman Long
Date: Tue Mar 30 2021 - 10:10:34 EST


On 3/29/21 11:13 PM, Guo Ren wrote:
> On Mon, Mar 29, 2021 at 8:50 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> On Mon, Mar 29, 2021 at 08:01:41PM +0800, Guo Ren wrote:
>>> u32 a = 0x55aa66bb;
>>> u16 *ptr = (u16 *)&a;
>>>
>>> CPU0                          CPU1
>>> =========                     =========
>>> xchg16(ptr, new)              while(1)
>>>                                   WRITE_ONCE(*(ptr + 1), x);
>>>
>>> When we use lr.w/sc.w to implement xchg16, it'll cause CPU0 to deadlock.
>> Then I think your LL/SC is broken.
>>
>> That also means you really don't want to build super complex locking
>> primitives on top, because that live-lock will percolate through.
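
To spell out what that emulation looks like: ignoring the exact lr.w/sc.w
sequence, a 16-bit xchg built on top of a 32-bit atomic is roughly the sketch
below (the helper name and the use of cmpxchg() instead of raw LL/SC are mine,
for illustration only). The retry loop can only succeed if the whole 32-bit
word is unchanged, so a CPU that keeps storing to the other halfword can make
it spin forever -- that is the live-lock being discussed.

static inline u16 xchg16_emulated(volatile u16 *ptr, u16 new)
{
	/* 32-bit word containing *ptr; assumes little-endian, as on RISC-V */
	u32 *base = (u32 *)((unsigned long)ptr & ~0x3UL);
	int shift = ((unsigned long)ptr & 0x2) * 8;
	u32 mask = 0xffffU << shift;
	u32 old, tmp;

	do {
		old = READ_ONCE(*base);
		tmp = (old & ~mask) | ((u32)new << shift);
		/*
		 * If another CPU keeps writing the other halfword of
		 * *base, this compare keeps failing -- the same effect
		 * as sc.w losing its reservation -- and the loop never
		 * terminates.
		 */
	} while (cmpxchg(base, old, tmp) != old);

	return (u16)((old & mask) >> shift);
}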
> Do you mean the below implementation has live-lock risk?
> +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
> +{
> +	u32 old, new, val = atomic_read(&lock->val);
> +
> +	for (;;) {
> +		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
> +		old = atomic_cmpxchg(&lock->val, val, new);
> +		if (old == val)
> +			break;
> +
> +		val = old;
> +	}
> +	return old;
> +}
If there is a continuous stream of incoming spinlock takers, it is possible that some CPUs may have to wait a long time to set the tail right. However, that should only happen with an artificial workload. I doubt it will happen with real workloads or with a limited number of CPUs.
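
For comparison, when the tail lives in its own 16-bit halfword (_Q_PENDING_BITS
== 8), the generic qspinlock code can publish the tail with a single wait-free
xchg and no retry loop at all -- roughly the following, quoted from memory, so
details may differ:

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	/*
	 * One 16-bit exchange on the tail halfword: no cmpxchg retry
	 * loop, so no starvation under a continuous stream of incoming
	 * lockers.  Relaxed ordering is sufficient because the caller
	 * initializes the MCS node before publishing the tail.
	 */
	return (u32)xchg_relaxed(&lock->tail,
				 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
}

That single xchg is exactly what a 16-bit xchg primitive buys you here.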

>> Step 1 would be to get your architecture fixed such that it can provide
>> fwd progress guarantees for LL/SC. Otherwise there's absolutely no point
>> in building complex systems with it.
> Quote Waiman's comment [1] on xchg16 optimization:
>
> "This optimization is needed to make the qspinlock achieve performance
> parity with ticket spinlock at light load."
>
> [1] https://lore.kernel.org/kvm/1429901803-29771-6-git-send-email-Waiman.Long@xxxxxx/
>
> So for a non-xchg16 machine:
>  - ticket-lock for small numbers of CPUs
>  - qspinlock for large numbers of CPUs
>
> Okay, I'll put all of them into the next patch :P

It is true that qspinlock may not offer much of an advantage when the number of CPUs is small; it shines on systems with many CPUs. You may use NR_CPUS to decide whether the default should be ticket lock or qspinlock, with a user override. To pick the right NR_CPUS threshold, you may need to benchmark on real SMP RISC-V systems.
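
A minimal sketch of that NR_CPUS-based default, in arch-header form; the
header names and the threshold of 16 are placeholders I made up for
illustration, not values from this patch set or from any measurement:

/* arch/riscv/include/asm/spinlock.h -- hypothetical sketch only */
#if CONFIG_NR_CPUS > 16		/* placeholder threshold; needs benchmarking */
#include <asm/qspinlock.h>
#include <asm/qrwlock.h>
#else
#include <asm/ticket_spinlock.h>	/* hypothetical ticket-lock header */
#endif

A Kconfig default plus a user-visible option would achieve the same thing; the
point is only that the cutover value has to come from real measurements.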

Cheers,
Longman