Re: possible deadlock in lru_add_drain_all
From: Peter Zijlstra
Date: Tue Oct 31 2017 - 11:10:49 EST
On Tue, Oct 31, 2017 at 03:58:04PM +0100, Michal Hocko wrote:
> On Tue 31-10-17 15:52:47, Peter Zijlstra wrote:
> [...]
> > If we want to save those stacks, we have to save a stacktrace on _every_
> > lock acquire, simply because we never know ahead of time if there will
> > be a new link. Doing this is _expensive_.
> >
> > Furthermore, the space into which we store stacktraces is limited;
> > since memory allocators use locks, we can't very well use dynamic memory
> > for lockdep -- that would give recursion and robustness issues.
>
> Wouldn't stackdepot help here? Sure, the first stack unwind will be
> costly, but then you amortize that over time. It is quite likely that
> locks are held from the same addresses.
I'm not familiar with that; but looking at it, no. It uses alloc_pages(),
which takes locks internally, and it has a lock of its own.
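To make the recursion concrete, here's a rough userspace sketch of the
problem; everything in it (mock_lock(), on_lock_acquire(), locked_alloc(),
the guard) is made up for illustration, not kernel code:

#include <stdio.h>
#include <stdlib.h>

/*
 * Userspace sketch of the recursion hazard; all names here are
 * hypothetical, not kernel APIs.
 */

static __thread int in_hook;

static void on_lock_acquire(void);

/* Every lock acquire runs the hook -- allocator locks included. */
static void mock_lock(void)
{
	on_lock_acquire();
	/* ... actually take the lock ... */
}

/* Stand-in for an allocator that, like the real one, takes locks. */
static void *locked_alloc(size_t size)
{
	mock_lock();
	return malloc(size);
}

static void on_lock_acquire(void)
{
	/* Without this guard: hook -> alloc -> lock -> hook -> ... */
	if (in_hook)
		return;
	in_hook = 1;

	/* Storing the trace dynamically re-enters the allocator. */
	free(locked_alloc(64));

	in_hook = 0;
}

int main(void)
{
	mock_lock();	/* the guard cuts the recursion off at depth 1 */
	puts("survived");
	return 0;
}

And even with a guard like that, the allocation can simply fail under
memory pressure, which is the robustness part of the problem.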
Also, it seems to index stacks by the entire stacktrace, which means you
actually have to have the stacktrace first. And doing stacktraces on
every single acquire is horrendously expensive.
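Roughly, a dedup scheme along those lines looks like the sketch below
(userspace, with hypothetical names like depot_save(); this is not the
actual stackdepot API). Note the caller has to hand it a fully unwound
trace before any lookup can happen:

#include <stdint.h>
#include <string.h>

/*
 * Sketch of stackdepot-style dedup (hypothetical).  The hash key is
 * the whole trace, so the unwinder must run before the lookup.
 * Assumes nr <= MAX_DEPTH; a full table is not handled.
 */

#define MAX_DEPTH	32
#define TABLE_SIZE	1024

struct trace_slot {
	uintptr_t	entries[MAX_DEPTH];
	size_t		nr;
	int		used;
};

static struct trace_slot table[TABLE_SIZE];	/* static: no allocator */

static uint64_t hash_trace(const uintptr_t *entries, size_t nr)
{
	const unsigned char *p = (const unsigned char *)entries;
	uint64_t h = 1469598103934665603ULL;	/* FNV-1a */
	size_t i;

	for (i = 0; i < nr * sizeof(*entries); i++)
		h = (h ^ p[i]) * 1099511628211ULL;
	return h;
}

/* Return a stable slot for this trace; store it on first sight. */
static struct trace_slot *depot_save(const uintptr_t *entries, size_t nr)
{
	size_t idx = hash_trace(entries, nr) % TABLE_SIZE;

	/* Linear probing on collision. */
	while (table[idx].used &&
	       (table[idx].nr != nr ||
		memcmp(table[idx].entries, entries,
		       nr * sizeof(*entries))))
		idx = (idx + 1) % TABLE_SIZE;

	if (!table[idx].used) {
		memcpy(table[idx].entries, entries,
		       nr * sizeof(*entries));
		table[idx].nr = nr;
		table[idx].used = 1;
	}
	return &table[idx];
}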
The idea just saves on storage; it doesn't help with having to do a
gazillion unwinds in the first place.