Re: [PATCH v10 03/19] qspinlock: Add pending bit

From: Waiman Long
Date: Tue May 13 2014 - 15:47:25 EST


On 05/12/2014 11:22 AM, Radim Krčmář wrote:
> 2014-05-07 11:01-0400, Waiman Long:
>> From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>>
>> Because the qspinlock needs to touch a second cacheline; add a pending
>> bit and allow a single in-word spinner before we punt to the second
>> cacheline.
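
For reference, here is a rough userspace sketch of the in-word pending
fast path described above. The bit layout (locked in bit 0, pending in
bit 8, queue tail in the upper bits), the helper name and the C11-atomics
coding are illustrative assumptions, not the actual patch code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define _Q_LOCKED_VAL   (1U << 0)       /* lock held */
#define _Q_PENDING_VAL  (1U << 8)       /* one in-word spinner */

struct qspinlock { _Atomic uint32_t val; };

static bool pending_fastpath(struct qspinlock *lock)
{
        uint32_t val = atomic_load_explicit(&lock->val, memory_order_relaxed);

        /* Someone is already pending or queued: punt to the MCS queue. */
        if (val & ~_Q_LOCKED_VAL)
                return false;

        /* Try to claim the pending bit so later lockers go to the queue. */
        if (!atomic_compare_exchange_strong(&lock->val, &val,
                                            val | _Q_PENDING_VAL))
                return false;

        /* We are the single in-word spinner: wait for the holder to unlock. */
        while (atomic_load_explicit(&lock->val, memory_order_acquire) &
               _Q_LOCKED_VAL)
                ;

        /*
         * Clear pending and set locked in a single wrapping add
         * (+_Q_LOCKED_VAL, -_Q_PENDING_VAL); tail bits, if any, survive.
         */
        atomic_fetch_add_explicit(&lock->val,
                                  _Q_LOCKED_VAL - _Q_PENDING_VAL,
                                  memory_order_acquire);
        return true;
}

The point of the pending bit is that this whole path touches only the lock
word itself; the per-cpu MCS nodes on the second cacheline are only needed
once a second waiter shows up.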
> I think there is an unwanted scenario on virtual machines:
> 1) VCPU sets the pending bit and starts spinning.
> 2) Pending VCPU gets descheduled.
>     - we have PLE and the lock holder isn't running [1]
>     - the hypervisor randomly preempts us
> 3) Lock holder unlocks while the pending VCPU is waiting in queue.
> 4) Subsequent lockers will see a free lock with the pending bit set and
>    will loop in trylock's 'for (;;)'
>     - the worst case is lock starvation [2]
>     - PLE can save us from wasting the whole timeslice
>
> A retry threshold is the easiest solution, regardless of its ugliness [4].

Yes, that can be a real issue. Some sort of retry threshold, as you said, should be able to handle it.
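
Something along those lines, building on the sketch above, could bound the
spinning with an arbitrary threshold before falling back to the queue
(SPIN_THRESHOLD and the helper are made up for illustration, not the
actual proposal):

#define SPIN_THRESHOLD  (1U << 15)      /* arbitrary, for illustration */

static bool trylock_bounded(struct qspinlock *lock)
{
        unsigned int retry;

        for (retry = 0; retry < SPIN_THRESHOLD; retry++) {
                uint32_t val = atomic_load_explicit(&lock->val,
                                                    memory_order_relaxed);

                /*
                 * Lock held, or pending/tail set (possibly by a vCPU that
                 * is currently preempted): keep retrying, but only up to
                 * the threshold instead of forever.
                 */
                if (val)
                        continue;

                /* Lock word is completely free: try to take it outright. */
                if (atomic_compare_exchange_strong(&lock->val, &val,
                                                   _Q_LOCKED_VAL))
                        return true;
        }

        /* Too many retries: give up and fall back to queueing. */
        return false;
}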

BTW, the relevant patch is 16/19, where the PV spinlock stuff should be discussed. This patch itself is perfectly fine.

> Another minor design flaw is that the formerly-first VCPU gets appended
> to the tail when it decides to queue;
> is the performance gain worth it?
>
> Thanks.

Yes, the performance gain is worth it. The primary goal is to be no worse than the ticket spinlock in the light-load situation, which is the most common case. This feature is needed to achieve that.

-Longman