Re: [BUG] 2.6.28-git LOCKDEP: Possible recursive rq->lock

From: Vaidyanathan Srinivasan
Date: Wed Jan 07 2009 - 11:33:28 EST


* Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> [2009-01-07 15:28:57]:

> On Wed, 2009-01-07 at 19:50 +0530, Vaidyanathan Srinivasan wrote:
> > * Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> [2009-01-07 14:12:43]:
> >
> > > On Wed, 2009-01-07 at 17:59 +0530, Vaidyanathan Srinivasan wrote:
> > >
> > > > =============================================
> > > > [ INFO: possible recursive locking detected ]
> > > > 2.6.28-autotest-tip-sv #1
> > > > ---------------------------------------------
> > > > klogd/5062 is trying to acquire lock:
> > > > (&rq->lock){++..}, at: [<ffffffff8022aca2>] task_rq_lock+0x45/0x7e
> > > >
> > > > but task is already holding lock:
> > > > (&rq->lock){++..}, at: [<ffffffff805f7354>] schedule+0x158/0xa31
> > > >
> > > > other info that might help us debug this:
> > > > 1 lock held by klogd/5062:
> > > > #0: (&rq->lock){++..}, at: [<ffffffff805f7354>] schedule+0x158/0xa31
> > > >
> > > > stack backtrace:
> > > > Pid: 5062, comm: klogd Not tainted 2.6.28-autotest-tip-sv #1
> > > > Call Trace:
> > > > [<ffffffff80259ef1>] __lock_acquire+0xeb9/0x16a4
> > > > [<ffffffff8025a6c0>] ? __lock_acquire+0x1688/0x16a4
> > > > [<ffffffff8025a761>] lock_acquire+0x85/0xa9
> > > > [<ffffffff8022aca2>] ? task_rq_lock+0x45/0x7e
> > > > [<ffffffff805fa4d4>] _spin_lock+0x31/0x66
> > > > [<ffffffff8022aca2>] ? task_rq_lock+0x45/0x7e
> > > > [<ffffffff8022aca2>] task_rq_lock+0x45/0x7e
> > > > [<ffffffff80233363>] try_to_wake_up+0x88/0x27a
> > > > [<ffffffff80233581>] wake_up_process+0x10/0x12
> > > > [<ffffffff805f775c>] schedule+0x560/0xa31
> > >
> > > I'd be most curious to know where in schedule we are.
> >
> > OK, we are at sched.c:3777:
> >
> > 		double_unlock_balance(this_rq, busiest);
> > 		if (active_balance)
> > >>>>>>>>>>> 		wake_up_process(busiest->migration_thread);
> >
> > 	} else
> >
> > We are in the active-balance path of newidle balancing, which implies
> > sched_mc was 2 at that time. Let me trace this and debug further.
>
> How about something like this? Strictly speaking we'll not deadlock,
> because ttwu will not be able to place the migration task on our rq, but
> since the code can deal with both rqs getting unlocked, this seems the
> easiest way out.

Hi Peter,

I agree. Unlocking this_rq is an easy way out; thanks for the
suggestion. I have moved the unlock and relock within the if
condition.
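
For reference, this path already tolerates this_rq->lock being
dropped: double_lock_balance() itself releases and retakes it when the
two same-class locks must be taken in address order, and annotates the
legitimate nesting with spin_lock_nested(). A sketch of the 2.6.28
helper (from memory, not the verbatim source):

/*
 * The second rq->lock, same lock class as the first, is taken with
 * spin_lock_nested(), so lockdep accepts the nesting; this_rq->lock
 * may be dropped and retaken to honour the locking order.
 */
static int double_lock_balance(struct rq *this_rq, struct rq *busiest)
{
	int ret = 0;

	if (unlikely(!spin_trylock(&busiest->lock))) {
		if (busiest < this_rq) {
			/* wrong order: drop ours, take both in order */
			spin_unlock(&this_rq->lock);
			spin_lock(&busiest->lock);
			spin_lock_nested(&this_rq->lock,
					 SINGLE_DEPTH_NESTING);
			ret = 1; /* caller knows this_rq->lock was dropped */
		} else
			spin_lock_nested(&busiest->lock,
					 SINGLE_DEPTH_NESTING);
	}
	return ret;
}

The wakeup path has no such annotation: task_rq_lock() in
try_to_wake_up() takes the target rq->lock with a plain spin_lock(),
which is why lockdep reports the recursion above.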

--Vaidy

sched: bug fix -- do not call ttwu while holding rq->lock

When sched_mc=2, wake_up_process() is called on busiest->migration_thread
in load_balance_newidle() while this_rq->lock is held. Though this will
not deadlock, it triggers a lockdep warning; the situation is easily
resolved by releasing this_rq->lock around the wakeup.

Signed-off-by: Vaidyanathan Srinivasan <svaidy@xxxxxxxxxxxxxxxxxx>

diff --git a/kernel/sched.c b/kernel/sched.c
index 71a054f..703a669 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -3773,8 +3773,12 @@ redo:
 		}
 
 		double_unlock_balance(this_rq, busiest);
-		if (active_balance)
+		if (active_balance) {
+			/* Should not call ttwu while holding a rq->lock */
+			spin_unlock(&this_rq->lock);
 			wake_up_process(busiest->migration_thread);
+			spin_lock(&this_rq->lock);
+		}
 
 	} else
 		sd->nr_balance_failed = 0;
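
As a userspace illustration only (not kernel code; pthread mutexes
stand in for rq->lock, and every name below is made up), the shape of
the fix is: drop our own queue lock before calling out to something
that takes another lock of the same class.

/* build with: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

struct rq {
	pthread_mutex_t lock;
	int nr_running;
};

static struct rq rqs[2] = {
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
	{ PTHREAD_MUTEX_INITIALIZER, 0 },
};

/* stands in for wake_up_process()/try_to_wake_up(): it must take the
 * target queue's lock, the same "class" as the lock the caller holds */
static void wake_on(struct rq *rq)
{
	pthread_mutex_lock(&rq->lock);
	rq->nr_running++;
	pthread_mutex_unlock(&rq->lock);
}

/* the patched pattern: this_rq->lock is held on entry, so release it
 * around the wakeup and retake it afterwards */
static void balance(struct rq *this_rq, struct rq *busiest)
{
	pthread_mutex_unlock(&this_rq->lock);
	wake_on(busiest);
	pthread_mutex_lock(&this_rq->lock);
}

int main(void)
{
	pthread_mutex_lock(&rqs[0].lock);	/* as schedule() holds it */
	balance(&rqs[0], &rqs[1]);
	pthread_mutex_unlock(&rqs[0].lock);
	printf("busiest nr_running = %d\n", rqs[1].nr_running);
	return 0;
}

As Peter notes above, this path already copes with both rqs getting
unlocked, so the release/retake around the wakeup is safe.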
