[RFC PATCH 3/3] locking/mutex: Optimize mutex trylock slowpath

From: Jason Low
Date: Wed Jun 04 2014 - 15:08:46 EST


In __mutex_trylock_slowpath(), we acquire the wait_lock spinlock,
xchg() lock->count with -1, then set lock->count back to 0 if there
are no waiters, and return true if the previous lock count was 1
(i.e. the mutex was unlocked).

However, if the mutex is already locked, then there may not be much
point in attempting the above operations. In this patch, we only
attempt the above operations if the mutex is unlocked.

The new MUTEX_IS_UNLOCKED() macro is used for this check.
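
MUTEX_IS_UNLOCKED() itself is not part of this diff; assuming it is the
plain check on the counter (lock->count is 1 when unlocked, 0 when
locked with no waiters, and negative when there may be waiters), a
minimal sketch of its definition would be:

	/* assumed definition, not taken from this patch */
	#define MUTEX_IS_UNLOCKED(mutex)	(atomic_read(&(mutex)->count) == 1)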

Signed-off-by: Jason Low <jason.low2@xxxxxx>
---
kernel/locking/mutex.c | 27 ++++++++++++++++-----------
1 files changed, 16 insertions(+), 11 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index fc55f72..c65680d 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -821,21 +821,26 @@ static inline int __mutex_trylock_slowpath(atomic_t *lock_count)
{
struct mutex *lock = container_of(lock_count, struct mutex, count);
unsigned long flags;
- int prev;
+ int prev = 0;

- spin_lock_mutex(&lock->wait_lock, flags);
+ /*
+ * Only need to trylock the mutex if it is unlocked.
+ */
+ if (MUTEX_IS_UNLOCKED(lock)) {
+ spin_lock_mutex(&lock->wait_lock, flags);

- prev = atomic_xchg(&lock->count, -1);
- if (likely(prev == 1)) {
- mutex_set_owner(lock);
- mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
- }
+ prev = atomic_xchg(&lock->count, -1);
+ if (likely(prev == 1)) {
+ mutex_set_owner(lock);
+ mutex_acquire(&lock->dep_map, 0, 1, _RET_IP_);
+ }

- /* Set it back to 0 if there are no waiters: */
- if (likely(list_empty(&lock->wait_list)))
- atomic_set(&lock->count, 0);
+ /* Set it back to 0 if there are no waiters: */
+ if (likely(list_empty(&lock->wait_list)))
+ atomic_set(&lock->count, 0);

- spin_unlock_mutex(&lock->wait_lock, flags);
+ spin_unlock_mutex(&lock->wait_lock, flags);
+ }

return prev == 1;
}
--
1.7.1
