Re: [PATCH RFC] locking/mutexes: don't spin on owner when wait list is not NULL.

From: Peter Zijlstra
Date: Fri Jan 22 2016 - 05:54:07 EST


On Fri, Jan 22, 2016 at 02:20:19AM -0800, Jason Low wrote:

> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -543,6 +543,8 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  	lock_contended(&lock->dep_map, ip);
>  
>  	for (;;) {
> +		bool acquired = false;
> +
>  		/*
>  		 * Lets try to take the lock again - this is needed even if
>  		 * we get here for the first time (shortly after failing to
> @@ -577,7 +579,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>  		/* didn't get the lock, go to sleep: */
>  		spin_unlock_mutex(&lock->wait_lock, flags);
>  		schedule_preempt_disabled();
> +
> +		if (mutex_is_locked(lock))
> +			acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx);
>  		spin_lock_mutex(&lock->wait_lock, flags);
> +		if (acquired)
> +			break;
>  	}
>  	__set_task_state(task, TASK_RUNNING);

I think the problem here is that mutex_optimistic_spin() leaves
mutex->count at 0, even though we have waiters (us, at the very least).

But this should be easy to fix: if we acquired the lock, we're also the
one who will release it, so there's no race in correcting the count
ourselves.
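
To spell out why a count of 0 with a populated wait list is fatal: the
counting convention is 1 == unlocked, 0 == locked (no waiters), negative
== locked with possible waiters, and the unlock fastpath only drops into
the wakeup slowpath when the count was negative. A minimal sketch of
that fastpath (simplified from the asm-generic flavour; the function
name here is made up for illustration):

	/*
	 * Simplified sketch of the mutex unlock fastpath; the real one
	 * lives in the arch headers (e.g. asm-generic/mutex-dec.h).
	 * "sketch_mutex_unlock" is an illustrative name only.
	 */
	static inline void sketch_mutex_unlock(struct mutex *lock)
	{
		/*
		 * If the spinner left the count at 0, this increment
		 * takes it 0 -> 1, which looks like "was locked, no
		 * waiters": the slowpath that walks ->wait_list and
		 * wakes somebody is never taken, and the sleepers
		 * sleep forever.  With the proper -1, the increment
		 * yields <= 0 and we go wake a waiter.
		 */
		if (unlikely(atomic_inc_return(&lock->count) <= 0))
			__mutex_unlock_slowpath(lock);
	}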

So something like this:

	if (acquired) {
		/* mark the mutex locked-with-waiters before leaving the loop */
		atomic_set(&lock->count, -1);
		break;
	}

That should deal with it -- we set the count back to 0 a little further
down anyway if the wait list ends up empty.
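
For reference, that's this existing bit near the bottom of
__mutex_lock_common() (quoting roughly from memory):

	/* set it to 0 if there are no waiters left: */
	if (likely(list_empty(&lock->wait_list)))
		atomic_set(&lock->count, 0);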


There might be other details, but this is the one that stood out.