Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order

From: Chris Wilson
Date: Thu Dec 01 2016 - 11:00:46 EST


On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
> @@ -677,15 +722,25 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 	debug_mutex_lock_common(lock, &waiter);
> 	debug_mutex_add_waiter(lock, &waiter, task);
>
> -	/* add waiting tasks to the end of the waitqueue (FIFO): */
> -	list_add_tail(&waiter.list, &lock->wait_list);
> +	lock_contended(&lock->dep_map, ip);
> +
> +	if (!use_ww_ctx) {
> +		/* add waiting tasks to the end of the waitqueue (FIFO): */
> +		list_add_tail(&waiter.list, &lock->wait_list);
> +	} else {
> +		/* Add in stamp order, waking up waiters that must back off. */
> +		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
> +		if (ret)
> +			goto err_early_backoff;
> +
> +		waiter.ww_ctx = ww_ctx;
> +	}
> +
> 	waiter.task = task;

Would an unconditional waiter.ww_ctx = ww_ctx be cheap enough? (Same
cacheline write and all that?)

That would make the above clearer, in that you would have:

	if (!ww_ctx) {
		list_add_tail(&waiter.list, &lock->wait_list);
	} else {
		/* no need to handle !ww_ctx here */
		ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
		if (ret)
			goto err_early_backoff;
	}

	waiter.ww_ctx = ww_ctx;
	waiter.task = task;

>
> 	if (__mutex_waiter_is_first(lock, &waiter))
> 		__mutex_set_flag(lock, MUTEX_FLAG_WAITERS);
>
> -	lock_contended(&lock->dep_map, ip);
> -
> 	set_task_state(task, state);
> 	for (;;) {
> 		/*
> @@ -693,8 +748,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 		 * mutex_unlock() handing the lock off to us, do a trylock
> 		 * before testing the error conditions to make sure we pick up
> 		 * the handoff.
> +		 *
> +		 * For w/w locks, we always need to do this even if we're not
> +		 * currently the first waiter, because we may have been the
> +		 * first waiter during the unlock.
> 		 */
> -		if (__mutex_trylock(lock, first))
> +		if (__mutex_trylock(lock, use_ww_ctx || first))

I'm not certain about the magic of first vs HANDOFF. Afaict, first ==
HANDOFF today, and this patch breaks that relationship. I think you need
to add a separate bool handoff; to track the handoff request
independently of first.
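
Something along these lines is roughly the separation I have in mind
(untested sketch, rest of the loop elided; exactly how handoff should
behave when a ww waiter later loses first position is the part that
still needs thought):

	bool first = false;	/* currently at the head of the wait list */
	bool handoff = false;	/* we armed MUTEX_FLAG_HANDOFF for ourselves */

	for (;;) {
		/* only claim the handoff if we armed it ourselves */
		if (__mutex_trylock(lock, handoff))
			goto acquired;

		/* ... signal/backoff checks, drop wait_lock, schedule ... */

		first = __mutex_waiter_is_first(lock, &waiter);
		if (first && !handoff) {
			__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
			handoff = true;
		}
	}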

> 			goto acquired;
>
> 		/*
> @@ -716,7 +775,20 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 		spin_unlock_mutex(&lock->wait_lock, flags);
> 		schedule_preempt_disabled();
>
> -		if (!first && __mutex_waiter_is_first(lock, &waiter)) {
> +		if (use_ww_ctx && ww_ctx) {
> +			/*
> +			 * Always re-check whether we're in first position. We
> +			 * don't want to spin if another task with a lower
> +			 * stamp has taken our position.
> +			 *
> +			 * We also may have to set the handoff flag again, if
> +			 * our position at the head was temporarily taken away.

Comment makes sense.

Ah. Should this be just if (use_ww_ctx) { /* always recheck... */ }?
Except that !ww_ctx waiters are never gazumped in the list, so if they
are first, then they are always first.

Could you explain that as well (i.e. why !ww_ctx is special here but not
above)? And then it could even be reduced to if (ww_ctx) { } to match the
first chunk, if that revision is acceptable.
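
I.e. something like this (sketch only, both bodies elided):

	if (ww_ctx) {
		/*
		 * ww waiters can be gazumped by a lower stamp, so always
		 * re-check first position and possibly re-arm the handoff.
		 */
		...
	} else if (!first && __mutex_waiter_is_first(lock, &waiter)) {
		/* plain mutex waiters stay first once they get there */
		...
	}
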
-Chris

--
Chris Wilson, Intel Open Source Technology Centre