On Tue, Feb 18, 2014 at 07:58:49PM -0500, Waiman Long wrote:
> On 02/18/2014 04:37 PM, Peter Zijlstra wrote:
> > On Tue, Feb 18, 2014 at 02:39:31PM -0500, Waiman Long wrote:
> > > +	/*
> > > +	 * At the head of the wait queue now
> > > +	 */
> > > +	while (true) {
> > > +		u32 qcode;
> > > +		int retval;
> > > +
> > > +		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
> > > +		if (retval > 0)
> > > +			; /* Lock not available yet */
> > > +		else if (retval < 0)
> > > +			/* Lock taken, can release the node & return */
> > > +			goto release_node;
> > > +		else if (qcode != my_qcode) {
> > > +			/*
> > > +			 * Just get the lock with other spinners waiting
> > > +			 * in the queue.
> > > +			 */
> > > +			if (queue_spin_trylock_unfair(lock))
> > > +				goto notify_next;
> > Why is this an option at all?
>
> Are you referring to the case (qcode != my_qcode)? This condition will be
> true if more than one task has queued up.

But in no case should we revert to unfair spinning or stealing. We
should always respect the queueing order.

If the lock tail no longer points to us, then there are further waiters
and we should wait for ->next and unlock it -- after we've taken the
lock.
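
Roughly, that fair handoff for the queue head could look like the sketch
below -- a simplified userspace illustration of only the head-of-queue step,
using invented names (struct qnode, struct qspinlock fields, head_acquire()),
not the actual patch code:

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct qnode {
		_Atomic(bool) locked;		/* set when this waiter becomes queue head */
		_Atomic(struct qnode *) next;	/* successor in the wait queue */
	};

	struct qspinlock {
		_Atomic(int) val;		/* 0 = free, 1 = held */
		_Atomic(struct qnode *) tail;	/* last queued waiter, NULL if queue empty */
	};

	/* Runs on the waiter at the head of the queue, owning "node". */
	static void head_acquire(struct qspinlock *lock, struct qnode *node)
	{
		int free = 0;

		/* First take the lock itself, in strict queue order. */
		while (!atomic_compare_exchange_weak(&lock->val, &free, 1))
			free = 0;

		/*
		 * If the tail still points to us, nobody queued behind us and
		 * the queue collapses.  Otherwise a successor exists (or is
		 * busy linking itself in): wait for ->next and unlock its
		 * node so it becomes the new queue head.
		 */
		struct qnode *self = node;
		if (!atomic_compare_exchange_strong(&lock->tail, &self, NULL)) {
			struct qnode *next;

			while (!(next = atomic_load(&node->next)))
				;	/* successor still linking in */
			atomic_store(&next->locked, true);
		}
	}

The point of the sketch is that the lock word is only ever taken by the
current queue head, and the successor is unlocked afterwards, so the
queueing order is preserved even under contention.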

> A task will be in this loop when it is already the head of a queue and is
> entitled to take the lock. The condition (qcode != my_qcode) is to decide
> whether it should just take the lock or take the lock & clear the code
> simultaneously. I am a bit cautious about using queue_spin_trylock_unfair() as
> there is a possibility that a CPU may run out of queue nodes and need to
> do unfair busy spinning.

No; there is no such possibility. Add BUG_ON(idx >= 4) and make sure of
it.

There's simply no more than 4 contexts that can nest at any one time:
task context
softirq context
hardirq context
nmi context
And someone contending a spinlock from NMI context should be shot
anyway.
Getting more nested spinlocks is an absolute hard fail.
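
As a rough illustration of that bound (the names get_qnode()/put_qnode() and
the layout are made up for the example, and assert() stands in for BUG_ON()),
a per-CPU pool of four queue nodes indexed by nesting depth can simply trap
on overflow rather than ever falling back to unfair spinning:

	#include <assert.h>

	#define MAX_NODES	4	/* task, softirq, hardirq, nmi */

	struct qnode {
		struct qnode *next;
		int locked;
	};

	/* One small pool of queue nodes per CPU (thread-local here). */
	static _Thread_local struct qnode qnodes[MAX_NODES];
	static _Thread_local int qnode_idx;	/* nested slowpath acquisitions in flight */

	static struct qnode *get_qnode(void)
	{
		/*
		 * Only four contexts can nest on one CPU, so needing a fifth
		 * node is a hard bug -- the kernel equivalent of
		 * BUG_ON(idx >= MAX_NODES) -- never a reason to spin unfairly.
		 */
		assert(qnode_idx < MAX_NODES);
		return &qnodes[qnode_idx++];
	}

	static void put_qnode(void)
	{
		qnode_idx--;
	}

Each context that enters the slowpath grabs the next node and releases it on
exit, so the index can never legitimately reach MAX_NODES.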