Re: [PATCH v4 05/15] lockdep: Make check_prev_add can use a separate stack_trace

From: Byungchul Park
Date: Wed Jan 18 2017 - 22:19:01 EST


On Wed, Jan 18, 2017 at 04:10:53PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 18, 2017 at 11:04:32AM +0900, Byungchul Park wrote:
> > On Tue, Jan 17, 2017 at 04:54:31PM +0100, Peter Zijlstra wrote:
> > > On Fri, Jan 13, 2017 at 07:11:43PM +0900, Byungchul Park wrote:
> > > > What do you think about the following patches doing it?
> > >
> > > I was more thinking about something like so...
> > >
> > > Also, I think I want to muck with struct stack_trace; the members:
> > > max_nr_entries and skip are input arguments to save_stack_trace() and
> > > bloat the structure for no reason.
> >
> > With your approach, save_trace() must be called whenever check_prevs_add()
> > is called, which might be unnecessary.
>
> True.. but since we hold the graph_lock this is a slow path anyway, so I
> didn't care much.

If we don't need to care about it, the problem becomes easy to solve. But
IMHO, it'd be better to care about it as the original lockdep code did:
save_trace() might have a bigger overhead than we expect, and
check_prevs_add() can be called frequently, so it'd be better to avoid the
call when possible.

> Then again, I forgot to clean up in a bunch of paths.
>
> > Frankly speaking, I think what I proposed resolved it neatly. Don't you
> > think so?
>
> My initial reaction was to your patches being radically different to
> what I had proposed. But after fixing mine I don't particularly like
> either one of them.
>
> Also, I think yours has a hole in, you check nr_stack_trace_entries
> against an older copy to check we did save_stack(), this is not accurate
> as check_prev_add() can drop graph_lock in the verbose case and then
> someone else could have done save_stack().

Right. My mistake..

Then the following patch on top of my patch 2/2 should solve it. Right?

---

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 49b9386..0f5bded 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1892,7 +1892,7 @@ static inline void inc_chains(void)
 		if (entry->class == hlock_class(next)) {
 			if (distance == 1)
 				entry->distance = 1;
-			return 2;
+			return 1;
 		}
 	}
 
@@ -1927,9 +1927,10 @@ static inline void inc_chains(void)
 		print_lock_name(hlock_class(next));
 		printk(KERN_CONT "\n");
 		dump_stack();
-		return graph_lock();
+		if (!graph_lock())
+			return 0;
 	}
-	return 1;
+	return 2;
 }
 
 /*
@@ -1975,15 +1976,16 @@ static inline void inc_chains(void)
 		 * added:
 		 */
 		if (hlock->read != 2 && hlock->check) {
-			if (!check_prev_add(curr, hlock, next,
-					distance, &trace, save))
+			int ret = check_prev_add(curr, hlock, next,
+						 distance, &trace, save);
+			if (!ret)
 				return 0;
 
 			/*
 			 * Stop saving stack_trace if save_trace() was
 			 * called at least once:
 			 */
-			if (save && start_nr != nr_stack_trace_entries)
+			if (save && ret == 2)
 				save = NULL;
 
 			/*