Re: [tip: locking/core] lockdep: Fix usage_traceoverflow

From: Boqun Feng
Date: Thu Oct 29 2020 - 23:52:08 EST


Hi Peter,

On Wed, Oct 28, 2020 at 08:59:10PM +0100, Peter Zijlstra wrote:
> On Wed, Oct 28, 2020 at 08:42:09PM +0100, Peter Zijlstra wrote:
> > On Wed, Oct 28, 2020 at 05:40:48PM +0000, Chris Wilson wrote:
> > > Quoting Chris Wilson (2020-10-27 16:34:53)
> > > > Quoting Peter Zijlstra (2020-10-27 15:45:33)
> > > > > On Tue, Oct 27, 2020 at 01:29:10PM +0000, Chris Wilson wrote:
> > > > >
> > > > > > <4> [304.908891] hm#2, depth: 6 [6], 3425cfea6ff31f7f != 547d92e9ec2ab9af
> > > > > > <4> [304.908897] WARNING: CPU: 0 PID: 5658 at kernel/locking/lockdep.c:3679 check_chain_key+0x1a4/0x1f0
> > > > >
> > > > > Urgh, I don't think I've _ever_ seen that warning trigger.
> > > > >
> > > > > The comments that go with it suggest memory corruption is the most
> > > > > likely trigger of it. Is it easy to trigger?
> > > >
> > > > For the automated CI, yes, the few machines that run that particular HW
> > > > test seem to hit it regularly. I have not yet reproduced it for myself.
> > > > I thought it looked like something kasan would provide some insight for
> > > > and we should get a kasan run through CI over the w/e. I suspect we've
> > > > fed in some garbage and called it a lock.
> > >
> > > I tracked it down to a second invocation of lock_acquire_shared_recursive()
> > > intermingled with some other regular mutexes (in this case ww_mutex).
> > >
> > > We hit this path in validate_chain():
> > > 		/*
> > > 		 * Mark recursive read, as we jump over it when
> > > 		 * building dependencies (just like we jump over
> > > 		 * trylock entries):
> > > 		 */
> > > 		if (ret == 2)
> > > 			hlock->read = 2;
> > >
> > > and that is modifying hlock_id() and so the chain-key, after it has
> > > already been computed.
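
(Side note for anyone following along: a simplified sketch, not the literal
mainline code, of why flipping ->read matters here. The chain key is
accumulated from hlock_id(), and hlock_id() encodes the read state, so
changing ->read afterwards makes the recomputation in check_chain_key()
disagree with the key that was cached at acquire time.)

	static inline u16 hlock_id(struct held_lock *hlock)
	{
		/* class index in the low bits, ->read in the high bits */
		return hlock->class_idx | (hlock->read << MAX_LOCKDEP_KEYS_BITS);
	}

	/* roughly what check_chain_key() recomputes and compares: */
	static u64 recompute_chain_key(struct task_struct *curr)
	{
		u64 chain_key = INITIAL_CHAIN_KEY;
		int i;

		for (i = 0; i < curr->lockdep_depth; i++)
			chain_key = iterate_chain_key(chain_key,
						      hlock_id(curr->held_locks + i));
		return chain_key;
	}
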
> >
> > Ooh, interesting.. I'll have to go look at this in the morning, brain is
> > fried already. Thanks for digging into it.
>

Sorry for the late response.

> So that's commit f611e8cf98ec ("lockdep: Take read/write status in
> consideration when generate chainkey") that did that.
>

Yeah, I think that's related, however ...

> So validate_chain() requires the new chain_key, but can change ->read
> which then invalidates the chain_key we just calculated.
>
> This happens when check_deadlock() returns 2, which only happens when:
>
> - next->read == 2 && ... ; however @next is our @hlock, so that's
>   pointless
>

I don't think we should return 2 (early) in this case anymore: now that we
have recursive read deadlock detection, it's safe to add the dependency
"prev -> next" to the dependency graph, so I think we can just continue in
this case. Actually, I think this is something I missed in my recursive
read detection patchset :-/
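
As an illustration (a made-up example, not from Chris's report), the
read-after-read recursion that branch matches looks like the sketch below;
with recursive read awareness in the graph we can simply record the
A(read) -> A(read) dependency and let the graph walk decide, instead of
bailing out early:

	#include <linux/spinlock.h>

	static DEFINE_RWLOCK(lock_A);

	static void recursive_read_example(void)
	{
		read_lock(&lock_A);	/* prev: recursive read of A */
		read_lock(&lock_A);	/*
					 * next: read == 2, same class as
					 * prev, so check_deadlock()
					 * currently returns 2 here.
					 */
		read_unlock(&lock_A);
		read_unlock(&lock_A);
	}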

> - when there's a nest_lock involved ; ww_mutex uses that !!!
>

That leaves check_deadlock() returning 2 only if hlock is acquired with a
nest_lock we hold (roughly the pattern sketched below), and ...
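
(In case the pattern isn't obvious, here is a hypothetical sketch of the
nest_lock case; it's roughly what ww_mutex relies on, with the acquire
context playing the role of the "outer" lock. Not taken from Chris's
report.)

	#include <linux/mutex.h>

	static DEFINE_MUTEX(outer);	/* acts as the nest_lock */
	static struct mutex obj[2];	/* two locks of the same class */

	static void nest_lock_example(void)
	{
		int i;

		for (i = 0; i < 2; i++)
			mutex_init(&obj[i]);	/* one init site => one class */

		mutex_lock(&outer);
		/*
		 * Same class acquired twice: check_deadlock() finds obj[0]
		 * when checking obj[1], but returns 2 because "outer" is
		 * held as the nest_lock and serializes these acquisitions.
		 */
		mutex_lock_nest_lock(&obj[0], &outer);
		mutex_lock_nest_lock(&obj[1], &outer);

		mutex_unlock(&obj[1]);
		mutex_unlock(&obj[0]);
		mutex_unlock(&outer);
	}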

> I suppose something like the below _might_ just do it, but I haven't
> compiled it, and like said, my brain is fried.
>
> Boqun, could you have a look, you're a few timezones ahead of us so your
> morning is earlier ;-)
>
> ---
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 3e99dfef8408..3caf63532bc2 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -3556,7 +3556,7 @@ static inline int lookup_chain_cache_add(struct task_struct *curr,
>
>  static int validate_chain(struct task_struct *curr,
>  			  struct held_lock *hlock,
> -			  int chain_head, u64 chain_key)
> +			  int chain_head, u64 *chain_key)
>  {
>  	/*
>  	 * Trylock needs to maintain the stack of held locks, but it
> @@ -3568,6 +3568,7 @@ static int validate_chain(struct task_struct *curr,
>  	 * (If lookup_chain_cache_add() return with 1 it acquires
>  	 * graph_lock for us)
>  	 */
> +again:
>  	if (!hlock->trylock && hlock->check &&
>  	    lookup_chain_cache_add(curr, hlock, chain_key)) {
>  		/*
> @@ -3597,8 +3598,12 @@ static int validate_chain(struct task_struct *curr,
>  		 * building dependencies (just like we jump over
>  		 * trylock entries):
>  		 */
> -		if (ret == 2)
> +		if (ret == 2) {
>  			hlock->read = 2;
> +			*chain_key = iterate_chain_key(hlock->prev_chain_key, hlock_id(hlock));

If "ret == 2" means hlock is a a nest_lock, than we don't need the
"->read = 2" trick here and we don't need to update chain_key either.
We used to have this "->read = 2" only because we want to skip the
dependency adding step afterwards. So how about the following:

It survived a lockdep selftest at boot time.

Regards,
Boqun

----------------------------->8
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 3e99dfef8408..b23ca6196561 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -2765,7 +2765,7 @@ print_deadlock_bug(struct task_struct *curr, struct held_lock *prev,
  * (Note that this has to be done separately, because the graph cannot
  * detect such classes of deadlocks.)
  *
- * Returns: 0 on deadlock detected, 1 on OK, 2 on recursive read
+ * Returns: 0 on deadlock detected, 1 on OK, 2 on nest_lock
  */
 static int
 check_deadlock(struct task_struct *curr, struct held_lock *next)
@@ -2788,7 +2788,7 @@ check_deadlock(struct task_struct *curr, struct held_lock *next)
 		 * lock class (i.e. read_lock(lock)+read_lock(lock)):
 		 */
 		if ((next->read == 2) && prev->read)
-			return 2;
+			continue;

 		/*
 		 * We're holding the nest_lock, which serializes this lock's
@@ -3592,16 +3592,9 @@ static int validate_chain(struct task_struct *curr,

 		if (!ret)
 			return 0;
-		/*
-		 * Mark recursive read, as we jump over it when
-		 * building dependencies (just like we jump over
-		 * trylock entries):
-		 */
-		if (ret == 2)
-			hlock->read = 2;
 		/*
 		 * Add dependency only if this lock is not the head
-		 * of the chain, and if it's not a secondary read-lock:
+		 * of the chain, and if it's not a nest_lock:
 		 */
 		if (!chain_head && ret != 2) {
 			if (!check_prevs_add(curr, hlock))