Re: [PATCH RT] futex/rtmutex: Cure RT double blocking issue

From: Wanpeng Li
Date: Wed May 10 2017 - 22:25:30 EST


2017-05-09 23:11 GMT+08:00 Thomas Gleixner <tglx@xxxxxxxxxxxxx>:
> RT has a problem when the wait on a futex/rtmutex got interrupted by a
> timeout or a signal. task->pi_blocked_on is still set when returning from
> rt_mutex_wait_proxy_lock(). The task must acquire the hash bucket lock
> after this.
>
> If the hash bucket lock is contended then the
> BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)) in
> task_blocks_on_rt_mutex() will trigger.
>
> This can be avoided by clearing task->pi_blocked_on in the return path of
> rt_mutex_wait_proxy_lock() which removes the task from the boosting chain
> of the rtmutex. That's correct because the task is no longer blocked on
> it.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Reported-by: Engleder Gerhard <eg@xxxxxxxx>
> ---
> kernel/locking/rtmutex.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -2380,6 +2380,7 @@ int rt_mutex_wait_proxy_lock(struct rt_m
>  			     struct hrtimer_sleeper *to,
>  			     struct rt_mutex_waiter *waiter)
>  {
> +	struct task_struct *tsk = current;
>  	int ret;
>
>  	raw_spin_lock_irq(&lock->wait_lock);
> @@ -2389,6 +2390,22 @@ int rt_mutex_wait_proxy_lock(struct rt_m
>  	/* sleep on the mutex */
>  	ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter, NULL);

Why not check the ret value, to avoid the lock/unlock of tsk->pi_lock
when the rt_mutex is acquired successfully?
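Something along these lines (just an untested sketch; it assumes, if I
read try_to_take_rt_mutex() correctly, that a successful acquisition
already clears task->pi_blocked_on under task->pi_lock, so the pi_lock
dance is only needed on the timeout/signal path):

	ret = __rt_mutex_slowlock(lock, TASK_INTERRUPTIBLE, to, waiter, NULL);

	if (ret) {
		/*
		 * Only reached when the wait was interrupted by a
		 * timeout or a signal; on success pi_blocked_on has
		 * already been cleared under tsk->pi_lock.
		 */
		raw_spin_lock(&tsk->pi_lock);
		tsk->pi_blocked_on = NULL;
		raw_spin_unlock(&tsk->pi_lock);
	}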

Regards,
Wanpeng Li

>
> +	/*
> +	 * RT has a problem here when the wait got interrupted by a timeout
> +	 * or a signal. task->pi_blocked_on is still set. The task must
> +	 * acquire the hash bucket lock when returning from this function.
> +	 *
> +	 * If the hash bucket lock is contended then the
> +	 * BUG_ON(rt_mutex_real_waiter(task->pi_blocked_on)) in
> +	 * task_blocks_on_rt_mutex() will trigger. This can be avoided by
> +	 * clearing task->pi_blocked_on which removes the task from the
> +	 * boosting chain of the rtmutex. That's correct because the task
> +	 * is no longer blocked on it.
> +	 */
> +	raw_spin_lock(&tsk->pi_lock);
> +	tsk->pi_blocked_on = NULL;
> +	raw_spin_unlock(&tsk->pi_lock);
> +
>  	raw_spin_unlock_irq(&lock->wait_lock);
>
>  	return ret;