Re: while_each_thread() under rcu_read_lock() is broken?

From: Mandeep Baines
Date: Sat Jun 19 2010 - 01:01:23 EST


On Fri, Jun 18, 2010 at 12:34 PM, Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
> (add cc's)
>
> Hmm. Once I sent this patch, I suddenly realized with horror that
> while_each_thread() is NOT safe under rcu_read_lock(). Both
> do_each_thread/while_each_thread or do/while_each_thread() can
> race with exec().
>
> Yes, it is safe to do next_thread() or next_task(). But:
>
>        #define while_each_thread(g, t) \
>                while ((t = next_thread(t)) != g)
>
> suppose that t is not the group leader, and it does de_thread() and then
> release_task(g). After that next_thread(t) returns t, not g, and the loop
> will never stop.
>
> I _really_ hope I missed something, will recheck tomorrow with the fresh
> head. Still I'd like to share my concerns...
>

Yep. You're right. Not sure what I was thinking. This is the only case
where do_each_thread is protected by an rcu_read_lock. All the others
correctly use read_lock.

> If I am right, probably we can fix this, something like
>
>        #define while_each_thread(g, t) \
>                while ((t = next_thread(t)) != g && pid_alive(g))
>

This seems like a reasonable approach. Maybe call it:

while_each_thread_maybe_rcu() :)

This makes hung_task a little less useful for failure fencing, since
this (and rcu_lock_break) increases the potential for never examining
all threads, but it's still a nice lightweight diagnostic for finding
bugs.

> [we can't do while (!thread_group_leader(t = next_thread(t)))],
> but this needs barriers, and we should validate the callers anyway.
>
> Or, perhaps,
>
>        #define XXX(t)  ({                              \
>                struct task_struct *__prev = t;         \
>                t = next_thread(t);                     \
>                t != g && t != __prev;                  \
>        })
>
>        #define while_each_thread(g, t) \
>                while (XXX(t))
>
> Please tell me I am wrong!
>
> Oleg.
>
> On 06/18, Oleg Nesterov wrote:
>>
>> check_hung_uninterruptible_tasks()->rcu_lock_break() introduced by
>> "softlockup: check all tasks in hung_task" commit ce9dbe24 looks
>> absolutely wrong.
>>
>>       - rcu_lock_break() does put_task_struct(). If the task has exited
>>         it is not safe to even read its ->state, nothing protects this
>>         task_struct.
>>
>>       - The TASK_DEAD checks are wrong too. Contrary to the comment, we
>>         can't use it to check if the task was unhashed. It can be unhashed
>>         without TASK_DEAD, or it can be valid with TASK_DEAD.
>>
>>         For example, an autoreaping task can do release_task(current)
>>         long before it sets TASK_DEAD in do_exit().
>>
>>         Or, a zombie task can have ->state == TASK_DEAD but release_task()
>>         was not called, and in this case we must not break the loop.
>>
>> Change this code to check pid_alive() instead, and do this before we
>> drop the reference to the task_struct.
>>
>> Signed-off-by: Oleg Nesterov <oleg@xxxxxxxxxx>
>> ---
>>
>>  kernel/hung_task.c |   11 +++++++----
>>  1 file changed, 7 insertions(+), 4 deletions(-)
>>
>> --- 35-rc2/kernel/hung_task.c~CHT_FIX_RCU_LOCK_BREAK  2009-12-18 19:05:38.000000000 +0100
>> +++ 35-rc2/kernel/hung_task.c 2010-06-18 20:06:11.000000000 +0200
>> @@ -113,15 +113,20 @@ static void check_hung_task(struct task_
>>   * For preemptible RCU it is sufficient to call rcu_read_unlock in order
>>   * exit the grace period. For classic RCU, a reschedule is required.
>>   */
>> -static void rcu_lock_break(struct task_struct *g, struct task_struct *t)
>> +static bool rcu_lock_break(struct task_struct *g, struct task_struct *t)
>>  {
>> +     bool can_cont;
>> +
>>       get_task_struct(g);
>>       get_task_struct(t);
>>       rcu_read_unlock();
>>       cond_resched();
>>       rcu_read_lock();
>> +     can_cont = pid_alive(g) && pid_alive(t);
>>       put_task_struct(t);
>>       put_task_struct(g);
>> +
>> +     return can_cont;
>>  }
>>
>>  /*
>> @@ -148,9 +153,7 @@ static void check_hung_uninterruptible_t
>>                       goto unlock;
>>               if (!--batch_count) {
>>                       batch_count = HUNG_TASK_BATCHING;
>> -                     rcu_lock_break(g, t);
>> -                     /* Exit if t or g was unhashed during refresh. */
>> -                     if (t->state == TASK_DEAD || g->state == TASK_DEAD)
>> +                     if (!rcu_lock_break(g, t))
>>                               goto unlock;
>>               }
>>               /* use "==" to skip the TASK_KILLABLE tasks waiting on NFS */
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/