Re: [tip: locking/core] lockdep: Fix lockdep recursion

From: Paul E. McKenney
Date: Tue Oct 13 2020 - 12:15:58 EST


On Tue, Oct 13, 2020 at 12:44:50PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 13, 2020 at 12:34:06PM +0200, Peter Zijlstra wrote:
> > On Mon, Oct 12, 2020 at 02:28:12PM -0700, Paul E. McKenney wrote:
> > > It is certainly an accident waiting to happen. Would something like
> > > the following make sense?
> >
> > Sadly no.

Hey, I was hoping! ;-)

> > > ------------------------------------------------------------------------
> > >
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index bfd38f2..52a63bc 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -4067,6 +4067,7 @@ void rcu_cpu_starting(unsigned int cpu)
> > >
> > >  	rnp = rdp->mynode;
> > >  	mask = rdp->grpmask;
> > > +	lockdep_off();
> > >  	raw_spin_lock_irqsave_rcu_node(rnp, flags);
> > >  	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext | mask);
> > >  	newcpu = !(rnp->expmaskinitnext & mask);
> > > @@ -4086,6 +4087,7 @@ void rcu_cpu_starting(unsigned int cpu)
> > >  	} else {
> > >  		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
> > >  	}
> > > +	lockdep_on();
> > >  	smp_mb(); /* Ensure RCU read-side usage follows above initialization. */
> > >  }
> >
> > This will just shut it up, but will not fix the actual problem of that
> > spin-lock ending up in trace_lock_acquire(), which relies on RCU, which
> > isn't looking.
> >
> > What we need here is to suppress tracing, not lockdep. Let me consider.

OK, I certainly didn't think in those terms.
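
To make sure I understand the failure mode, the entry path looks roughly
like this (an approximate sketch from memory, not the exact source):
lockdep_off() just bumps a recursion counter, and lock_acquire() consults
that counter only after the tracepoint has already fired and used RCU.

void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
		  int trylock, int read, int check,
		  struct lockdep_map *nest_lock, unsigned long ip)
{
	/*
	 * The tracepoint fires first and relies on RCU internally,
	 * which is not legal while RCU is not watching this CPU.
	 */
	trace_lock_acquire(lock, subclass, trylock, read, check,
			   nest_lock, ip);

	/* lockdep_off() is honored only here, too late to help. */
	if (unlikely(current->lockdep_recursion))
		return;

	/* ... actual lockdep processing ... */
}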

> We appear to have a similar problem with rcu_report_dead(): its
> raw_spin_unlock()s can end up in trace_lock_release() while we just
> killed RCU.

In theory, rcu_report_dead() is just fine. The reason (sketched below) is
that a new grace period that ignores the outgoing CPU cannot start until after:

1. This CPU releases the leaf rcu_node ->lock -and-

2. The grace-period kthread acquires this same lock.
   Multiple times.
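
In case a concrete picture helps, here is how I see the two sides (an
approximate sketch, not the actual source, with the grace-period kthread's
several acquisitions of that lock compressed into one):

	/* Outgoing CPU, in rcu_report_dead() (simplified): */
	raw_spin_lock_irqsave_rcu_node(rnp, flags);
	WRITE_ONCE(rnp->qsmaskinitnext, rnp->qsmaskinitnext & ~mask);
	raw_spin_unlock_irqrestore_rcu_node(rnp, flags);	/* Step 1 above. */

	/* Grace-period kthread, at grace-period start (simplified): */
	raw_spin_lock_irq_rcu_node(rnp);			/* Step 2 above. */
	rnp->qsmaskinit = rnp->qsmaskinitnext;	/* Outgoing CPU's bit is gone... */
	rnp->qsmask = rnp->qsmaskinit;		/* ...so the new GP ignores it. */
	raw_spin_unlock_irq_rcu_node(rnp);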

In practice, too bad about those diagnostics! :-(

So good catch!!!

Thanx, Paul