Re: [PATCH -v4 8/8] locking/mutex: Enable optimistic spinning of woken waiter
From: Will Deacon
Date: Thu Oct 13 2016 - 11:28:22 EST
On Fri, Oct 07, 2016 at 04:52:51PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long@xxxxxxx>
>
> This patch makes the waiter that sets the HANDOFF flag start spinning
> instead of sleeping, until either the handoff is complete or the owner
> goes to sleep. Otherwise, the handoff will cause the optimistic
> spinners to abort spinning, as the handed-off owner may not be running.
>
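[ As a rough illustration of the idea, a minimal userspace model in C11
  atomics. This is not the kernel code: the toy_* names, the lock-word
  layout and the return-value convention are invented for the example;
  the real series keeps the handoff flag bits in the low bits of
  lock->owner, which is what the sketch mimics. ]

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TOY_HANDOFF ((uintptr_t)0x1)    /* flag bit in the owner word */

    struct toy_mutex {
            _Atomic uintptr_t owner;        /* "task" pointer | flag bits */
    };

    /*
     * Called by the waiter after it has set TOY_HANDOFF: keep polling
     * the owner word until the unlocker installs us as the new owner,
     * instead of going back to sleep.
     */
    static bool toy_spin_for_handoff(struct toy_mutex *lock, uintptr_t self)
    {
            for (;;) {
                    /* acquire pairs with the unlocker's releasing store;
                       it stands in for the smp_mb() the patch adds */
                    uintptr_t cur = atomic_load_explicit(&lock->owner,
                                                         memory_order_acquire);

                    if ((cur & ~TOY_HANDOFF) == self)
                            return true;    /* handoff complete: lock is ours */
                    if (!(cur & ~TOY_HANDOFF))
                            return false;   /* released without handoff: take
                                               the normal acquire path */
                    /* owner still holds the lock: keep spinning */
            }
    }

[ The point, per the changelog above, is that the handed-off waiter is
  running (spinning) at the moment it becomes the owner, so other
  optimistic spinners do not observe a not-running owner and give up. ]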
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Imre Deak <imre.deak@xxxxxxxxx>
> Cc: Jason Low <jason.low2@xxxxxxx>
> Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> Cc: Ding Tianhong <dingtianhong@xxxxxxxxxx>
> Cc: Davidlohr Bueso <dave@xxxxxxxxxxxx>
> Cc: Will Deacon <Will.Deacon@xxxxxxx>
> Signed-off-by: Waiman Long <Waiman.Long@xxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> ---
> kernel/locking/mutex.c | 77 ++++++++++++++++++++++++++++++++++---------------
> 1 file changed, 54 insertions(+), 23 deletions(-)
>
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -416,24 +416,39 @@ static inline int mutex_can_spin_on_owne
> *
> * Returns true when the lock was taken, otherwise false, indicating
> * that we need to jump to the slowpath and sleep.
> + *
> + * The waiter flag is set to true if the spinner is a waiter in the wait
> + * queue. The waiter-spinner will spin on the lock directly and concurrently
> + * with the spinner at the head of the OSQ, if present, until the owner is
> + * changed to itself.
> */
> static bool mutex_optimistic_spin(struct mutex *lock,
> - struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
> + struct ww_acquire_ctx *ww_ctx,
> + const bool use_ww_ctx, const bool waiter)
> {
> struct task_struct *task = current;
>
> - if (!mutex_can_spin_on_owner(lock))
> - goto done;
> + if (!waiter) {
> + /*
> + * The purpose of the mutex_can_spin_on_owner() function is
> + * to eliminate the overhead of osq_lock() and osq_unlock()
> + * in case spinning isn't possible. As a waiter-spinner
> + * is not going to take OSQ lock anyway, there is no need
> + * to call mutex_can_spin_on_owner().
> + */
> + if (!mutex_can_spin_on_owner(lock))
> + goto fail;
>
> - /*
> - * In order to avoid a stampede of mutex spinners trying to
> - * acquire the mutex all at once, the spinners need to take a
> - * MCS (queued) lock first before spinning on the owner field.
> - */
> - if (!osq_lock(&lock->osq))
> - goto done;
> + /*
> + * In order to avoid a stampede of mutex spinners trying to
> + * acquire the mutex all at once, the spinners need to take a
> + * MCS (queued) lock first before spinning on the owner field.
> + */
> + if (!osq_lock(&lock->osq))
> + goto fail;
> + }
>
> - while (true) {
> + for (;;) {
> struct task_struct *owner;
>
> if (use_ww_ctx && ww_ctx->acquired > 0) {
> @@ -449,7 +464,7 @@ static bool mutex_optimistic_spin(struct
> * performed the optimistic spinning cannot be done.
> */
> if (READ_ONCE(ww->ctx))
> - break;
> + goto fail_unlock;
> }
>
> /*
> @@ -457,15 +472,20 @@ static bool mutex_optimistic_spin(struct
> * release the lock or go to sleep.
> */
> owner = __mutex_owner(lock);
> - if (owner && !mutex_spin_on_owner(lock, owner))
> - break;
> + if (owner) {
> + if (waiter && owner == task) {
> + smp_mb(); /* ACQUIRE */
Hmm, is this barrier actually needed? This only happens on the handoff path,
and we take the wait_lock immediately after this succeeds anyway. That
control dependency, coupled with the acquire semantics of the spin_lock,
should be sufficient, no?
Will
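[ For illustration, the ordering question in compilable form: again a
  C11 userspace sketch with invented toy_* names, not the kernel code.
  The contested full barrier would sit between the owner load and the
  wait_lock acquisition. ]

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    static _Atomic uintptr_t toy_owner;                  /* lock->owner     */
    static atomic_flag toy_wait_lock = ATOMIC_FLAG_INIT; /* lock->wait_lock */

    static bool toy_handoff_then_wait_lock(uintptr_t self)
    {
            /* The load that observes the handoff (__mutex_owner() above). */
            if (atomic_load_explicit(&toy_owner, memory_order_relaxed) != self)
                    return false;

            /*
             * The patch issues a full barrier (smp_mb()) here.  The
             * question above is whether it is redundant: this point is
             * only reached if the load saw the handoff, which gives a
             * control dependency, and ...
             */

            /* ... spin_lock(&lock->wait_lock) follows, an ACQUIRE: */
            while (atomic_flag_test_and_set_explicit(&toy_wait_lock,
                                                     memory_order_acquire))
                    ;       /* spin (cpu_relax() in the kernel) */

            /*
             * Nothing after the acquire above can be reordered before
             * it, which is the ordering the standalone barrier was
             * presumably there to provide on this path.
             */
            atomic_flag_clear_explicit(&toy_wait_lock, memory_order_release);
            return true;
    }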