Re: [PATCH -tip/master 4/7] locking/mutex: Refactor optimistic spinning code

From: Peter Zijlstra
Date: Mon Jul 28 2014 - 05:08:37 EST


On Sun, Jul 27, 2014 at 10:18:41PM -0700, Davidlohr Bueso wrote:
> @@ -180,6 +266,126 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
> */
> return retval;
> }
> +
> +/*
> + * Atomically try to take the lock when it is available */

comment fail.
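That is, the closing */ appears to have eaten the comment's last line;
presumably something like the below was intended:

	/*
	 * Atomically try to take the lock when it is available.
	 */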

> +static inline bool mutex_try_to_acquire(struct mutex *lock)
> +{
> + return !mutex_is_locked(lock) &&
> + (atomic_cmpxchg(&lock->count, 1, 0) == 1);
> +}
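FWIW, a minimal userspace sketch of the same check-before-cmpxchg idea,
using C11 atomics instead of the kernel primitives (toy_mutex and
toy_try_to_acquire are made-up names, not anything in the tree):

	#include <stdatomic.h>
	#include <stdbool.h>

	struct toy_mutex {
		atomic_int count;	/* 1: unlocked, 0: locked */
	};

	static bool toy_try_to_acquire(struct toy_mutex *lock)
	{
		int unlocked = 1;

		/*
		 * Cheap plain load first: skip the bus-locked cmpxchg
		 * entirely when the lock is visibly taken.
		 */
		if (atomic_load_explicit(&lock->count,
					 memory_order_relaxed) != 1)
			return false;

		return atomic_compare_exchange_strong(&lock->count,
						      &unlocked, 0);
	}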

> +static bool mutex_optimistic_spin(struct mutex *lock,
> + struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
> +{


> + /*
> + * If we fell out of the spin path because of need_resched(),
> + * reschedule now, before we try-lock the mutex. This avoids getting
> + * scheduled out right after we obtained the mutex.
> + */
> + if (need_resched())
> + schedule_preempt_disabled();
> +
> + return false;
> +}


> + if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
> + /* got it, yay! */
> + preempt_enable();
> + return 0;
> }
> +
> /*
> * If we fell out of the spin path because of need_resched(),
> * reschedule now, before we try-lock the mutex. This avoids getting
> @@ -475,7 +512,7 @@ slowpath:
> */
> if (need_resched())
> schedule_preempt_disabled();
> +
> spin_lock_mutex(&lock->wait_lock, flags);

We now have two if (need_resched()) schedule_preempt_disabled()
instances; was that on purpose?
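Condensed, the post-refactor flow reads to me as follows (my paraphrase
of the two hunks above, not the actual code):

	if (mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx)) {
		/* got it, yay! */
		preempt_enable();
		return 0;
	}

	/*
	 * mutex_optimistic_spin() already did the
	 * need_resched()/schedule_preempt_disabled() dance on its
	 * failure path, and then the slowpath repeats it here:
	 */
	if (need_resched())
		schedule_preempt_disabled();

	spin_lock_mutex(&lock->wait_lock, flags);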
