On Fri, 2014-08-08 at 13:21 -0700, Davidlohr Bueso wrote:
> On Fri, 2014-08-08 at 12:50 -0700, Jason Low wrote:
> > > @@ -730,6 +744,23 @@ __mutex_unlock_common_slowpath(struct mutex *lock, int nested)
> > >  	if (__mutex_slowpath_needs_to_unlock())
> > >  		atomic_set(&lock->count, 1);
> > > +/*
> > > + * The mutex_has_owner() check below lets us bail out without taking
> > > + * the wait_lock. It is skipped when DEBUG, as in that case we must
> > > + * not return before calling mutex_release() and debug_mutex_unlock();
> > > + * doing so can otherwise result in deadlocks when another task
> > > + * enters the lock's slowpath in mutex_lock().
> > > + */
> > > +#ifndef CONFIG_DEBUG_MUTEXES
> > > +	/*
> > > +	 * Abort the wakeup operation if there is another mutex owner, as
> > > +	 * the lock was stolen. mutex_unlock() should have cleared the
> > > +	 * owner field before calling this function. If that field is now
> > > +	 * set, another task must have acquired the mutex.
> > > +	 */
> > > +	if (mutex_has_owner(lock))
> > > +		return;
> > > +#endif
> > Would we need the mutex lock count to eventually get set to a negative
> > value if there are waiters? An optimistic spinner can get the lock and
> > set lock->count to 0. Then the lock count might remain 0, since a waiter
> > might not get woken up here to try-lock and set lock->count to -1 if it
> > goes back to sleep in the lock path.
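> >
> > For reference, the counter convention at play here, sketched from the
> > fastpath logic (illustrative, not the exact source):
> >
> > 	/* mutex_lock() fastpath: 1 -> 0 acquires the lock uncontended;
> > 	 * anything else takes the slowpath, where a waiter sets the
> > 	 * count to -1 before going to sleep: */
> > 	if (atomic_dec_return(&lock->count) < 0)
> > 		__mutex_lock_slowpath(&lock->count);
> >
> > 	/* mutex_unlock() fastpath: 0 -> 1 means no waiters recorded;
> > 	 * a negative old value sends us to the wakeup slowpath: */
> > 	if (atomic_inc_return(&lock->count) <= 0)
> > 		__mutex_unlock_slowpath(&lock->count);
> >
> > So if the count never drops back to -1, subsequent unlocks stay in
> > the fastpath.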
>
> This is a good point, but I think we are safe because we do not rely on
> strict dependence between the mutex counter and the wait list. So to see
> if there are waiters to wake up, we do a !list_empty() check, but to
> determine the lock state, we rely on the counter.
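>
> I.e. the unlock slowpath is shaped roughly like this (paraphrased for
> illustration, not the literal source):
>
> 	/* the lock state lives in the counter... */
> 	if (__mutex_slowpath_needs_to_unlock())
> 		atomic_set(&lock->count, 1);
>
> 	spin_lock(&lock->wait_lock);
> 	/* ...while "is there anyone to wake?" is answered by the list */
> 	if (!list_empty(&lock->wait_list)) {
> 		struct mutex_waiter *waiter =
> 			list_entry(lock->wait_list.next,
> 				   struct mutex_waiter, list);
> 		wake_up_process(waiter->task);
> 	}
> 	spin_unlock(&lock->wait_lock);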

Right, though if an optimistic spinner gets the lock, it would set
lock->count to 0. After it is done with its critical region and calls
mutex_unlock(), it would skip the slowpath and not wake up the next
thread either, because it sees that lock->count is 0. In that case,
the following mutex_unlock() call would also skip waking up the
waiter, as there is no call to the slowpath.
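
Spelled out, the interleaving I have in mind would be something like
this (hypothetical trace, with the proposed mutex_has_owner() early
return in place):

	T0 (owner)             T1 (spinner)           T2 (waiter, asleep)
	----------             ------------           -------------------
	mutex_unlock()
	  count -1 -> 0,
	  enters slowpath
	  slowpath sets
	  count = 1
	                       steals the lock:
	                         count 1 -> 0,
	                         owner = T1
	  mutex_has_owner()
	  -> true, returns
	  without waking T2,
	  so count is never
	  set back to -1
	                       mutex_unlock()
	                         count 0 -> 1,
	                         fastpath only:
	                         no wakeup
	                                              still asleep;
	                                              the wakeup is lost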