Re: [PATCH] locking/mutex: Reduce chance of setting HANDOFF bit on unlocked mutex

From: Waiman Long
Date: Wed Jun 30 2021 - 09:53:56 EST


On 6/30/21 6:21 AM, Peter Zijlstra wrote:
> On Tue, Jun 29, 2021 at 04:11:38PM -0400, Waiman Long wrote:
> > diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> > index d2df5e68b503..472ab21b5b8e 100644
> > --- a/kernel/locking/mutex.c
> > +++ b/kernel/locking/mutex.c
> > @@ -118,9 +118,9 @@ static inline struct task_struct *__mutex_trylock_or_owner(struct mutex *lock)
> >  		}
> >  
> >  		/*
> > -		 * We set the HANDOFF bit, we must make sure it doesn't live
> > -		 * past the point where we acquire it. This would be possible
> > -		 * if we (accidentally) set the bit on an unlocked mutex.
> > +		 * Always clear the HANDOFF bit before acquiring the lock.
> > +		 * Note that if the bit is accidentally set on an unlocked
> > +		 * mutex, anyone can acquire it.
> >  		 */
> >  		flags &= ~MUTEX_FLAG_HANDOFF;
> >  
> > @@ -180,6 +180,11 @@ static inline void __mutex_set_flag(struct mutex *lock, unsigned long flag)
> >  	atomic_long_or(flag, &lock->owner);
> >  }
> >  
> > +static inline long __mutex_fetch_set_flag(struct mutex *lock, unsigned long flag)
> > +{
> > +	return atomic_long_fetch_or_relaxed(flag, &lock->owner);
> > +}
> > +
> >  static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
> >  {
>
> Hurmph, so we already have a cmpxchg loop in trylock, might as well
> make that do exactly what we want without holes in it.
>
> How's something like the below? Boot tested, but please verify.
>
> ---
>  kernel/locking/mutex.c | 89 ++++++++++++++++++++++++++------------------------
>  1 file changed, 46 insertions(+), 43 deletions(-)

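For anyone skimming the thread: the __mutex_fetch_set_flag() helper in the
first quoted hunk is the fetch-or variant of __mutex_set_flag(). It hands
the old owner word back to the caller, which would let the setter see
whether the mutex was locked at the moment the bit went in (the hunk adding
the caller isn't quoted above). A minimal user-space sketch of that
fetch-or pattern follows, using C11 atomics; the names (toy_owner,
fetch_set_flag) are invented for illustration and are not the kernel code.

/* Toy model: low bits of the owner word hold flags, the rest the task. */
#include <stdatomic.h>
#include <stdio.h>

#define MUTEX_FLAG_HANDOFF	0x02UL
#define MUTEX_FLAGS		0x07UL

static _Atomic unsigned long toy_owner;	/* task pointer | flag bits */

static unsigned long fetch_set_flag(unsigned long flag)
{
	/* Like atomic_long_fetch_or_relaxed(): set @flag, return old word. */
	return atomic_fetch_or_explicit(&toy_owner, flag,
					memory_order_relaxed);
}

int main(void)
{
	unsigned long old = fetch_set_flag(MUTEX_FLAG_HANDOFF);

	/* No task bits in the old word means the mutex was unlocked. */
	if (!(old & ~MUTEX_FLAGS))
		printf("raced: HANDOFF went onto an unlocked mutex\n");
	return 0;
}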
The updated code looks good to me. It is an even better approach, as it makes sure that the HANDOFF bit will never be set on an unlocked mutex in the first place.
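To spell out why the cmpxchg-loop approach closes the window: the cmpxchg
installs the new owner and the cleared HANDOFF bit in a single atomic step,
so there is no moment at which the mutex is unlocked with HANDOFF still
set. Below is a compilable user-space sketch of that loop in C11 atomics.
The names (toy_mutex, toy_mutex_trylock) are invented for illustration, and
the real __mutex_trylock_or_owner() also handles the PICKUP/handoff-target
cases that this toy deliberately skips.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MUTEX_FLAG_HANDOFF	0x02UL
#define MUTEX_FLAGS		0x07UL

struct toy_mutex {
	_Atomic unsigned long owner;	/* task pointer | flag bits */
};

/*
 * Acquire @lock for @task if it is unlocked.  The cmpxchg publishes the
 * new owner and the cleared HANDOFF bit together, so the bit can never
 * be observed on an unlocked mutex.
 */
static bool toy_mutex_trylock(struct toy_mutex *lock, unsigned long task)
{
	unsigned long owner = atomic_load_explicit(&lock->owner,
						   memory_order_relaxed);

	for (;;) {	/* must loop, can race against the flag bits */
		unsigned long flags = owner & MUTEX_FLAGS;

		if (owner & ~MUTEX_FLAGS)
			return false;	/* somebody else owns it */

		/* Clear HANDOFF and take ownership in one atomic step. */
		flags &= ~MUTEX_FLAG_HANDOFF;
		if (atomic_compare_exchange_weak_explicit(&lock->owner,
				&owner, task | flags,
				memory_order_acquire, memory_order_relaxed))
			return true;
		/* cmpxchg failed and refreshed @owner; go around again. */
	}
}

int main(void)
{
	struct toy_mutex m = { .owner = MUTEX_FLAG_HANDOFF };

	if (toy_mutex_trylock(&m, 0x1000UL))	/* fake task pointer */
		printf("owner=%#lx, HANDOFF cleared in the same step\n",
		       atomic_load(&m.owner));
	return 0;
}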

Reviewed-by: Waiman Long <longman@xxxxxxxxxx>