Commit 91d2a812dfb9 ("locking/rwsem: Make handoff writer optimistically
spin on owner") assumes that when the owner field is changed to NULL,
the lock will become free soon. That assumption may not hold, especially
if the handoff writer doing the spinning is an RT task which may preempt
another task from completing its action of either freeing the rwsem or
properly setting up the owner field.
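For illustration, here is a simplified sketch of the problematic window
(identifiers are from kernel/locking/rwsem.c, but the fastpath details
are elided; both tasks are assumed to be on the same CPU):

	Reader (lower priority)            Handoff writer (RT task)
	-----------------------            ------------------------
	atomic_long_add(RWSEM_READER_BIAS,
			&sem->count);
	  <preempted before it can set     rwsem_spin_on_owner() sees a
	   the owner field or back the     NULL owner
	   reader count out>               rwsem_try_write_lock() fails
	                                   (reader count is nonzero)
	                                   retries without ever sleeping,
	                                   so the reader never runs again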
To prevent this livelock scenario, we have to limit the number of
trylock attempts made without sleeping. The limit is set to 8 to allow
enough time for the other task to hopefully complete its action.
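In outline, the change amounts to the following bounded-retry shape (a
paraphrase of the patch hunks below, not a literal excerpt; error paths
and signal handling are elided):

	int null_owner_retries = 0;

	for (;;) {
		if (rwsem_try_write_lock(sem, &waiter))
			break;				/* lock acquired */
		...
		if ((owner_state == OWNER_NULL) &&
		    (null_owner_retries < 8)) {
			null_owner_retries++;		/* spin a bit longer */
			goto trylock_again;		/* no sleep yet */
		}
		null_owner_retries = 0;			/* limit hit; sleep */
		schedule();
		...
	}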
New lock events were added to track the number of NULL-owner retries
with the handoff flag set before a successful trylock. Running a
96-thread locking microbenchmark with an equal number of readers and
writers on a 2-socket, 96-thread system for 15 seconds yielded the
following stats. Note that none of the locking threads are RT tasks.
  Retries of successful trylock   Count
  -----------------------------   -----
                1                  1738
                2                    19
                3                    11
                4                     2
                5                     1
                6                     1
                7                     1
                8                     0
                X                     1
The last row is the one failed attempt that needed more than 8 retries.
So a retry count maximum of 8 should capture most of them if no RT task
is in the mix.
Fixes: 91d2a812dfb9 ("locking/rwsem: Make handoff writer optimistically spin on owner")
Reported-by: Mukesh Ojha <quic_mojha@xxxxxxxxxxx>
Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
---
kernel/locking/rwsem.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 3839b38608da..12eb093328f2 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1123,6 +1123,7 @@ static struct rw_semaphore __sched *
 rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 {
 	struct rwsem_waiter waiter;
+	int null_owner_retries;
 	DEFINE_WAKE_Q(wake_q);
 
 	/* do optimistic spinning and steal lock if possible */
@@ -1164,7 +1165,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	set_current_state(state);
 	trace_contention_begin(sem, LCB_F_WRITE);
 
-	for (;;) {
+	for (null_owner_retries = 0;;) {
 		if (rwsem_try_write_lock(sem, &waiter)) {
 			/* rwsem_try_write_lock() implies ACQUIRE on success */
 			break;
@@ -1190,8 +1191,21 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 			owner_state = rwsem_spin_on_owner(sem);
 			preempt_enable();
 
-			if (owner_state == OWNER_NULL)
+			/*
+			 * A NULL owner doesn't guarantee that the lock is
+			 * free. An incoming reader will temporarily
+			 * increment the reader count without changing
+			 * owner, and rwsem_try_write_lock() will fail if
+			 * the reader is not able to decrement it in time.
+			 * Allow 8 trylock attempts when hitting a NULL
+			 * owner before going to sleep.
+			 */
+			if ((owner_state == OWNER_NULL) &&
+			    (null_owner_retries < 8)) {
+				null_owner_retries++;
 				goto trylock_again;
+			}
+			null_owner_retries = 0;
 		}
 
 		schedule();