Re: [PATCH] locking/lockdep: Fix expected depth value in __lock_release()
From: Jerome Glisse
Date: Tue Oct 20 2015 - 10:40:29 EST
On Tue, Oct 20, 2015 at 03:07:29PM +0200, Peter Zijlstra wrote:
> On Tue, Oct 20, 2015 at 08:42:19AM -0400, Jerome Glisse wrote:
> > On Tue, Oct 20, 2015 at 02:18:53PM +0200, Peter Zijlstra wrote:
>
> > > What code did you find that triggered this? That is, what code is taking
> > > nested locks with other locks in the middle? (Not wrong per-se, just
> > > curious how that would come about).
> >
> > Well, I am not able to reproduce it myself, but it happens as part of
> > mm_drop_all_locks(). As to which lock triggers it, I am unsure: all the
> > i_mmap_rwsem locks are taken one after the other, and the same goes for
> > the anon_vma rwsems, so those should already coalesce properly. My guess
> > is that the code taking all the locks also holds a mutex, and once all
> > the vma locks are dropped that mutex coalesces with mm_all_locks_mutex.
>
> Ah yes, mm_all_locks_mutex looks like a likely candidate.
>
> Curious, this code is ancient, and I've never seen a report of this
> triggering.
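For context, the shape of that code is roughly the following (simplified,
not the actual mm/mmap.c source; the nest_lock argument is elided):

	mutex_lock(&mm_all_locks_mutex);

	for (...)	/* every vma with a file mapping */
		down_write_nest_lock(&mapping->i_mmap_rwsem, ...);

	for (...)	/* every vma with an anon_vma */
		down_write_nest_lock(&anon_vma->root->rwsem, ...);

so lockdep sees long runs of same-class acquisitions with the same
nest_lock, each run coalescing into a single held_lock entry with a
reference count, while whatever mutex the caller already holds sits
below those entries on the held-lock stack.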
>
> > > > This patch adjusts the expected depth value by decrementing it when
> > > > what were previously two entries on the stack are coalesced into a
> > > > single entry.
> > >
> > > Would it not make more sense to scan the entire hlock stack on
> > > __lock_acquire() and avoid getting collapsible entries in the first
> > > place?
> > >
> > > Something like so...
> >
> > It would work too; it is probably more compute intensive than my
> > solution, but this is lockdep code so I guess that is fine. Also, I do
> > not know whether we lose any valuable information by not keeping the
> > stack ordered, i.e. the ability to check the order in which the locks
> > were taken.
>
> Right; it's a little bit more expensive, but only for acquires with
> nest_lock set, which should be rare.
>
> As to the order: since they're all of the same class, it's fine to
> collapse them.
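(For reference, the coalescing in question is, roughly and from memory,
the check __lock_acquire() does against the top of the held-lock stack;
simplified:

	if (depth) {
		hlock = curr->held_locks + depth - 1;
		/* same class and nest_lock set: reuse the top entry */
		if (hlock->class_idx == class_idx && nest_lock) {
			if (hlock->references)
				hlock->references++;
			else
				hlock->references = 2;
			return 1;
		}
	}

so only a repeat of whatever class happens to be on top of the stack can
be merged into an existing entry.)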
>
> However the proposed alternative avoids 'strange' boundary cases like:
>
> mutex_lock(&top_lock);
>
> for (...) {
>         mutex_lock_nest_lock(&obj->lock1, &top_lock);
>         mutex_lock_nest_lock(&obj->lock2, &top_lock);
> }
>
> Which would currently result in running out of lock stack space real
> quick, since it would never be able to collapse.
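To make the difference concrete, here is a toy userspace model of the
held-lock stack (the names and the depth limit are made up for
illustration; this is not the lockdep code itself):

	#include <stdio.h>

	#define MAX_DEPTH	48	/* stand-in for MAX_LOCK_DEPTH */

	struct hlock {
		int class_idx;
		int references;
	};

	static struct hlock stack[MAX_DEPTH];
	static int depth;

	/* Current behaviour: only try to coalesce with the top entry. */
	void acquire_top_only(int class_idx)
	{
		if (depth && stack[depth - 1].class_idx == class_idx) {
			stack[depth - 1].references++;
			return;
		}
		stack[depth].class_idx = class_idx;
		stack[depth].references = 1;
		depth++;
	}

	/* Proposed alternative: scan the whole stack for a match. */
	void acquire_scan(int class_idx)
	{
		int i;

		for (i = 0; i < depth; i++) {
			if (stack[i].class_idx == class_idx) {
				stack[i].references++;
				return;
			}
		}
		stack[depth].class_idx = class_idx;
		stack[depth].references = 1;
		depth++;
	}

	int main(void)
	{
		void (*acquire)(int) = acquire_top_only; /* or acquire_scan */
		int i;

		acquire(0);			/* top_lock */
		for (i = 0; i < 10; i++) {
			acquire(1);		/* obj->lock1 */
			acquire(2);		/* obj->lock2 */
		}
		printf("depth = %d\n", depth);	/* 21 vs. 3 with acquire_scan */
		return 0;
	}

With the top-only check the depth grows by two per loop iteration and
eventually overflows the stack; with the full scan it stays at three.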
>
> In any case, can you 'test' the proposed alternative in any way?
I will ask for it to be tested; it will probably take a couple of days
before I hear back. I will report as soon as I have confirmation.
Cheers,
Jérôme