When the mutex unlock path is executed with the WAITERS bit set but the
HANDOFF bit clear, it wakes up the first task on the wait_list. If some
task that is not on the wait_list is trying to steal the lock at that
moment, the steal is very likely to succeed, because the task part of
lock->owner is NULL while the flags part is non-zero. The steal then
clears the HANDOFF bit. But if the HANDOFF bit was just set by the woken
waiter on the wait_list, this clearing is unexpected and the requested
handoff is lost:
  CPU0 (stealing task)                  CPU1 (first waiter on wait_list)

  __mutex_lock_common                   __mutex_lock_common
    __mutex_trylock                       schedule_preempt_disabled
    /* steal lock successfully */         __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF)
      __mutex_trylock_or_owner
        if (task == NULL)
          flags &= ~MUTEX_FLAG_HANDOFF
        atomic_long_cmpxchg_acquire
                                          __mutex_trylock           /* failed */
                                          mutex_optimistic_spin     /* failed */
                                          schedule_preempt_disabled /* sleeps without
                                                                       HANDOFF bit set */
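For reference, the stealing path that drops the bit looks roughly like
this (an abridged sketch of __mutex_trylock_or_owner(); the
owner-is-current/PICKUP handling and debug checks are left out, so this
is not the verbatim source):

	unsigned long owner, curr = (unsigned long)current;

	owner = atomic_long_read(&lock->owner);
	for (;;) {
		unsigned long old, flags = __owner_flags(owner);
		unsigned long task = owner & ~MUTEX_FLAGS;

		if (task)
			break;		/* owned: no steal, details omitted */

		/*
		 * task == NULL: the lock is up for grabs. Whatever HANDOFF
		 * bit was observed in 'owner' (e.g. the one the woken first
		 * waiter just set) is cleared before we install ourselves
		 * as the owner.
		 */
		flags &= ~MUTEX_FLAG_HANDOFF;

		old = atomic_long_cmpxchg_acquire(&lock->owner, owner,
						  curr | flags);
		if (old == owner)
			return NULL;	/* steal succeeded, HANDOFF lost */

		owner = old;		/* raced against a flag update, retry */
	}

Note that even when the waiter sets MUTEX_FLAG_HANDOFF after the
atomic_long_read() above, the cmpxchg merely fails and the loop retries
with the updated owner value, so the bit is still cleared on the next
pass.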
So the HANDOFF bit should be set as late as possible; here we defer
setting it until just before the waiter schedules out again.
Signed-off-by: Yanfei Xu <yanfei.xu@xxxxxxxxxxxxx>
---
Hi maintainers,
I am not sure whether I have missed or misunderstood something, please
help to review. Many thanks!
kernel/locking/mutex.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 013e1b08a1bf..e57d920e96bf 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -1033,17 +1033,17 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
}
spin_unlock(&lock->wait_lock);
+
+ if (first)
+ __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
schedule_preempt_disabled();
/*
* ww_mutex needs to always recheck its position since its waiter
* list is not FIFO ordered.
*/
- if (ww_ctx || !first) {
+ if (ww_ctx || !first)
first = __mutex_waiter_is_first(lock, &waiter);
- if (first)
- __mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
- }
set_current_state(state);
/*