Re: PI BUG with -rt13

From: Ingo Molnar
Date: Fri Nov 18 2005 - 04:37:25 EST

* David Singleton <dsingleton@xxxxxxxxxx> wrote:

> >I was testing PI support in the -rt tree (-rt13) when I hit the BUG
> >below. I am using the BULL/Montavista glibc patches. However I would
> >think this can be reproduced using just plain FUTEX_WAKE/WAIT_ROBUST
> >APIs as well, though I haven't tried. I can send out the test code
> >if anybody is interested. I have attached the .config below.
> >
> >

> If I make the lock in the timer_base_s struct a raw spinlock this
> BUG goes away.

that most likely just papers over the real bug. Given the task-reference
count bug I fixed in the robust/PI-futexes code (see the patch below), I
suspect some more races and/or plain incorrect code.

[this patch below also converts the robust/PI-futex code to use RCU
instead of the tasklist_lock - which should remove a major latency
source from the futex code].
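The core of the refcount fix is that a lookup done under RCU may only take a
reference while the task's count is still nonzero; once the last reference is
gone, the "tryget" must fail so the looker-up treats the task as already dead.
A minimal userspace sketch of that pattern (hypothetical struct task /
task_tryget / task_put names; C11 atomics standing in for the kernel's
task refcounting and get_task_struct_rcu()):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-in for a task with a reference count. */
struct task {
	atomic_int refcount;
};

/* Take a reference only if the object is still live (refcount > 0). */
bool task_tryget(struct task *t)
{
	int old = atomic_load(&t->refcount);

	while (old > 0) {
		/* CAS so we never resurrect a count that already hit zero. */
		if (atomic_compare_exchange_weak(&t->refcount, &old, old + 1))
			return true;	/* reference taken */
	}
	return false;	/* already dying: caller must not touch it */
}

/* Drop a reference; in the kernel, hitting zero would free the task. */
void task_put(struct task *t)
{
	atomic_fetch_sub(&t->refcount, 1);
}
```

The lookup side then does tryget between rcu_read_lock()/rcu_read_unlock(),
treats a failed tryget as "task not found", and pairs every successful tryget
with a task_put() once done with the task, as the patch below does.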


Index: linux/kernel/rt.c
===================================================================
--- linux.orig/kernel/rt.c
+++ linux/kernel/rt.c
@@ -2939,15 +2939,20 @@ EXPORT_SYMBOL(rt_mutex_owned_by);
  * and now own the lock, or negative values for failure, or positive
  * values for the amount of time we waited before getting the lock.
  */
-int fastcall down_futex(struct rt_mutex *lock, unsigned long time, pid_t owner_pid)
+int fastcall
+down_futex(struct rt_mutex *lock, unsigned long time, pid_t owner_pid)
 {
 	struct task_struct *owner_task = NULL;
 	unsigned long eip = CALLER_ADDR0;
-	read_lock(&tasklist_lock);
+	int ret;
+	rcu_read_lock();
 	owner_task = find_task_by_pid(owner_pid);
-	read_unlock(&tasklist_lock);
+	if (!get_task_struct_rcu(owner_task))
+		owner_task = NULL;
+	rcu_read_unlock();
 
 	if (!owner_task)
@@ -2956,7 +2961,10 @@ int fastcall down_futex(struct rt_mutex
 	__down_mutex(lock __EIP__);
 	rt_mutex_set_owner(lock, owner_task->thread_info);
 
-	return __down_interruptible(lock, time __EIP__);
+	ret = __down_interruptible(lock, time __EIP__);
+	put_task_struct(owner_task);
+	return ret;
 }