On Sun, 2014-08-03 at 22:36 -0400, Waiman Long wrote:
On a highly contended rwsem, spinlock contention due to the slow
rwsem_wake() call can be a significant portion of the total CPU cycles
used. With writer lock stealing and writer optimistic spinning, there
is also a pretty good chance that the lock may have been stolen
before the waker wakes up the waiters. The woken tasks, if any,
will have to go back to sleep again.
This patch adds checking code at the beginning of the rwsem_wake()
and __rwsem_do_wake() functions to look for a spinner and an active
writer respectively. The presence of an active writer will abort the
wakeup operation. The presence of a spinner will still allow the
wakeup operation to proceed as long as the trylock operation
succeeds. This strikes a good balance between excessive spinlock
contention, especially when there are a lot of active readers, and a
lot of failed fastpath operations because there are tasks waiting in
the queue.
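Just to make sure I'm reading the changelog right (the rwsem-xadd.c
hunk is trimmed from my quote below), the rwsem_wake() side of this
would be something like the sketch that follows. This is my own
reconstruction, not the literal patch: the __rwsem_do_wake()
active-writer bailout is not shown, and rwsem_has_spinner() is an
assumed helper over sem->osq (see the naming discussion further down):

/* Sketch only, not the actual hunk. */
struct rw_semaphore *rwsem_wake(struct rw_semaphore *sem)
{
	unsigned long flags;

	/*
	 * If an optimistic spinner is present, the lock will most likely
	 * be stolen before any woken waiter can grab it, so only do the
	 * wakeup if the wait_lock can be taken without contention.
	 */
	if (rwsem_has_spinner(sem)) {
		if (!raw_spin_trylock_irqsave(&sem->wait_lock, flags))
			return sem;
		goto locked;
	}
	raw_spin_lock_irqsave(&sem->wait_lock, flags);
locked:
	/* do nothing if the wait list is empty */
	if (!list_empty(&sem->wait_list))
		sem = __rwsem_do_wake(sem, RWSEM_WAKE_ANY);

	raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
	return sem;
}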
Signed-off-by: Waiman Long <Waiman.Long@xxxxxx>
---
include/linux/osq_lock.h | 5 ++++
kernel/locking/rwsem-xadd.c | 57 ++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 61 insertions(+), 1 deletions(-)
diff --git a/include/linux/osq_lock.h b/include/linux/osq_lock.h
index 90230d5..79db546 100644
--- a/include/linux/osq_lock.h
+++ b/include/linux/osq_lock.h
@@ -24,4 +24,9 @@ static inline void osq_lock_init(struct optimistic_spin_queue *lock)
atomic_set(&lock->tail, OSQ_UNLOCKED_VAL);
}
+static inline bool osq_has_spinner(struct optimistic_spin_queue *lock)
+{
+ return atomic_read(&lock->tail) != OSQ_UNLOCKED_VAL;
+}
Like with other locks, should we make this "osq_is_locked"? We can
still add the rwsem has_spinner() abstraction, which makes use of
osq_is_locked(), if we want.
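Roughly something like this (just a sketch of the naming I have in
mind, nothing here exists in any tree yet):

/* include/linux/osq_lock.h -- suggested name (sketch) */
static inline bool osq_is_locked(struct optimistic_spin_queue *lock)
{
	return atomic_read(&lock->tail) != OSQ_UNLOCKED_VAL;
}

/* kernel/locking/rwsem-xadd.c -- rwsem-level wrapper (sketch) */
#ifdef CONFIG_RWSEM_SPIN_ON_OWNER
static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
{
	return osq_is_locked(&sem->osq);
}
#else
static inline bool rwsem_has_spinner(struct rw_semaphore *sem)
{
	return false;
}
#endif

That keeps the osq-level naming shared with the other lock types and
leaves the rwsem-specific meaning in rwsem-xadd.c.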