[PATCH v2 1/5] rwsem: check the lock before cmpxchg in down_write_trylock
From: Tim Chen
Date: Mon Jun 24 2013 - 19:20:39 EST
cmpxchg causes cacheline bouncing while it does the value check, which
becomes a scalability issue on a large machine (like an 80 core box).
Pre-reading the lock word and bailing out early relieves this contention.
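
For reference, a minimal sketch of the same test-and-test-and-set idea in
portable C11 atomics (illustrative only, not the kernel implementation;
all names below are made up):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_long lock_word;	/* 0 means unlocked in this sketch */

static bool sketch_trylock(void)
{
	long expected = 0;

	/*
	 * Plain pre-read: contended callers fail here while the cache
	 * line stays shared, instead of each issuing a cmpxchg that
	 * takes the line exclusive and bounces it between CPUs.
	 */
	if (atomic_load_explicit(&lock_word, memory_order_relaxed) != 0)
		return false;

	/* Only a likely-to-succeed caller pays for the atomic RMW. */
	return atomic_compare_exchange_strong(&lock_word, &expected, 1);
}
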
Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
---
include/asm-generic/rwsem.h | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/asm-generic/rwsem.h b/include/asm-generic/rwsem.h
index bb1e2cd..5ba80e7 100644
--- a/include/asm-generic/rwsem.h
+++ b/include/asm-generic/rwsem.h
@@ -70,11 +70,11 @@ static inline void __down_write(struct rw_semaphore *sem)
 static inline int __down_write_trylock(struct rw_semaphore *sem)
 {
-	long tmp;
+	if (unlikely(sem->count != RWSEM_UNLOCKED_VALUE))
+		return 0;
 
-	tmp = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
-		      RWSEM_ACTIVE_WRITE_BIAS);
-	return tmp == RWSEM_UNLOCKED_VALUE;
+	return cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
+		       RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_UNLOCKED_VALUE;
 }
 
 /*
--
1.7.4.4
--