Re: BUG: MAX_STACK_TRACE_ENTRIES too low! (2)

From: Bart Van Assche
Date: Wed Jul 10 2019 - 17:23:45 EST


On 7/10/19 1:47 PM, Eric Dumazet wrote:
>
>
> On 7/10/19 9:09 PM, Bart Van Assche wrote:
>> On 7/10/19 11:44 AM, Eric Dumazet wrote:
>>> If anything using workqueues in dynamically allocated objects can turn off lockdep,
>>> we have a serious issue.
>>
>> As far as I know that issue is only hit by syzbot tests.
>
>> Anyway, I see
>> two possible paths forward:
>> * Revert the patch that makes workqueues use dynamic lockdep keys and
>> thereby reintroduce the false positives that lockdep reports if
>> different workqueues share a lockdep key. I think there is agreement
>> that having to analyze lockdep false positives is annoying, time
>> consuming and something nobody likes.
>> * Modify lockdep such that space in its fixed size arrays that is no
>> longer in use gets reused. Since the stack traces saved in the
>> stack_trace[] array have a variable size that array will have to be
>> compacted to avoid fragmentation.
>>
>
> Can't destroy_workqueue() undo what alloc_workqueue() did?

destroy_workqueue() already calls lockdep_unregister_key(). If it
didn't, sooner or later one of the warning statements in the lockdep
code would trigger, e.g. the WARN_ON_ONCE() statement in
lockdep_register_key(). I think the root cause is that
lockdep_unregister_key() does not free the stack traces recorded by
the lockdep save_trace() function. save_trace() is called every time
a lock is acquired and lockdep encounters a new lock dependency chain.
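For context, the bookkeeping behind the "MAX_STACK_TRACE_ENTRIES too
low!" warning looks roughly like this (paraphrased and simplified from
kernel/locking/lockdep.c, not the exact upstream code): traces are
carved out of one fixed-size array and nr_stack_trace_entries only
ever grows, so entries recorded for a key that is later unregistered
are never reclaimed.

	static unsigned long nr_stack_trace_entries;
	static unsigned long stack_trace[MAX_STACK_TRACE_ENTRIES];

	static int save_trace(struct lock_trace *trace)
	{
		unsigned long *entries = stack_trace + nr_stack_trace_entries;
		unsigned int max_entries =
			MAX_STACK_TRACE_ENTRIES - nr_stack_trace_entries;

		/* Hand out the next chunk of the fixed-size array. */
		trace->offset = nr_stack_trace_entries;
		trace->nr_entries = stack_trace_save(entries, max_entries, 3);
		nr_stack_trace_entries += trace->nr_entries;

		/* Nothing is ever given back, so eventually this fires. */
		if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES - 1) {
			print_lockdep_off("BUG: MAX_STACK_TRACE_ENTRIES too low!");
			return 0;
		}

		return 1;
	}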

As one can see in remove_class_from_lock_chain(), lockdep already
contains code for compacting the chain_hlocks[] array. Similar code
does not yet exist for the stack_trace[] array because I had not
encountered any overflows of that array during my tests.
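A compaction pass for stack_trace[] could follow the same pattern. A
rough, hypothetical sketch (the zap_stack_trace() name and the details
below are made up for illustration; a real implementation would also
have to fix up the offset stored in every remaining lock_trace):

	static void zap_stack_trace(struct lock_trace *trace)
	{
		const unsigned long start = trace->offset;
		const unsigned long count = trace->nr_entries;

		/* Close the hole by moving the tail of stack_trace[] down. */
		memmove(stack_trace + start, stack_trace + start + count,
			(nr_stack_trace_entries - start - count) *
			sizeof(stack_trace[0]));
		nr_stack_trace_entries -= count;

		/*
		 * Every lock_trace whose offset is larger than 'start' now
		 * points past its entries by 'count', so all of those
		 * offsets would have to be walked and decremented here.
		 */
	}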

Bart.