[PATCH v7 00/23] locking/lockdep: Add support for dynamic keys

From: Bart Van Assche
Date: Thu Feb 14 2019 - 18:01:29 EST


Hi Peter and Ingo,

A known shortcoming of the current lockdep implementation is that it requires
lock keys to be allocated statically. This forces certain unrelated
synchronization objects to share keys, and this key sharing can cause false
positive deadlock reports. This patch series adds support for dynamic keys in
the lockdep code and eliminates a class of false positive reports from the
workqueue implementation.
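
As an illustration of the new interface, here is a minimal sketch of how a
dynamic key can be attached to a lock instance. lockdep_register_key(),
lockdep_unregister_key() and lockdep_set_class() are the calls used by this
series; struct my_object and the my_object_*() functions are made up for the
example:

#include <linux/lockdep.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical object that embeds a dynamically allocated lock. */
struct my_object {
        spinlock_t lock;
        struct lock_class_key key;      /* per-object lockdep key */
};

static struct my_object *my_object_create(void)
{
        struct my_object *obj = kzalloc(sizeof(*obj), GFP_KERNEL);

        if (!obj)
                return NULL;
        /* Tell lockdep about the key before it is used. */
        lockdep_register_key(&obj->key);
        spin_lock_init(&obj->lock);
        /* Give this lock instance its own class instead of a shared one. */
        lockdep_set_class(&obj->lock, &obj->key);
        return obj;
}

static void my_object_destroy(struct my_object *obj)
{
        /* Unregister the key before freeing the memory that backs it. */
        lockdep_unregister_key(&obj->key);
        kfree(obj);
}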

Please consider these patches for kernel v5.1.

Thanks,

Bart.

The changes compared to v6 are:
- For delayed freeing, adopted Peter's approach since that approach does not
require sleeping in the context from which data structures are freed (see
the call_rcu() sketch after this list).
- Instead of freeing list_entries[] elements in a delayed fashion, free these
immediately.
- Added two patches that fix a false positive lockdep complaint in the block
layer.
- Split several patches to make these easier to read.
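
To illustrate why no sleeping is needed with that approach: freeing can be
deferred to an RCU callback instead of waiting for a grace period in the
caller. The sketch below only shows the general call_rcu() pattern; the
struct and function names (zapped_data etc.) are made up and do not match
the patch code:

#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct zapped_data {
        struct rcu_head rcu;
        /* ... references to the lockdep objects being freed ... */
};

static void free_zapped_data_rcu(struct rcu_head *rcu)
{
        struct zapped_data *d = container_of(rcu, struct zapped_data, rcu);

        kfree(d);       /* runs after a grace period has elapsed */
}

/* May be called with the graph lock held; never sleeps. */
static void schedule_free_zapped_data(struct zapped_data *d)
{
        call_rcu(&d->rcu, free_zapped_data_rcu);
}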

The changes compared to v5 are:
- Modified zap_class() such that it doesn't try to free a list entry that
is already being freed.
- Added a patch that fixes an existing bug in add_chain_cache().
- Further improved the code that reports the size needed for the lockdep
data structures.
- Rebased and retested this patch series on top of kernel v5.0-rc1.

The changes compared to v4 are:
- Introduced the function lockdep_set_selftest_task() to fix a build failure
for CONFIG_LOCKDEP=n.
- Fixed a use-after-free issue in is_dynamic_key() by adding the following
code in that function: if (!debug_locks) return true; (see the sketch after
this list).
- Changed if (WARN_ON_ONCE(!pf)) into if (!pf) so that the new lockdep
implementation does not trigger more kernel warnings than the current
implementation. This keeps the build happy when running regression tests.
- Added a synchronize_rcu() call at the end of lockdep_unregister_key() to
avoid a use-after-free.
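
For reference, a simplified sketch of is_dynamic_key() with the debug_locks
check mentioned above; helper and member names follow the patch but details
have been elided:

static bool is_dynamic_key(const struct lock_class_key *key)
{
        struct lock_class_key *k;
        bool found = false;

        /*
         * If lock debugging has been disabled the key hash table may
         * contain pointers to freed memory, so do not walk it.
         */
        if (!debug_locks)
                return true;

        rcu_read_lock();
        hlist_for_each_entry_rcu(k, keyhashentry(key), hash_entry) {
                if (k == key) {
                        found = true;
                        break;
                }
        }
        rcu_read_unlock();

        return found;
}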

The changes compared to v3 are:
- Reworked the code that frees objects that are no longer used such that a
grace period is now guaranteed to elapse between last use and freeing.
- The lockdep self-tests pass again.
- Made sure that the patch that removes all matching lock order entries does
not cause list corruption. Note: the change made in that patch to prevent
the corruption is removed again by a later patch; in other words, it is only
needed to keep the series bisectable.
- Rebased this patch series on top of the tip/locking/core branch.

The changes compared to v2 are:
- Made sure that all schedule_free_zapped_classes() calls are protected by
the graph lock.
- When removing a lock class, only recalculate lock chains that have been
modified.
- Combined a list_del() and list_add_tail() call into a single
list_move_tail() call in register_lock_class() (see the sketch after this
list).
- Used an RCU read lock instead of the graph lock inside is_dynamic_key().
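
The list_move_tail() change mentioned above is a straightforward
simplification; schematically (list and member names as in lockdep.c):

        /* Before: delete from the current list, then append. */
        list_del(&class->lock_entry);
        list_add_tail(&class->lock_entry, &all_lock_classes);

        /* After: one call with the same effect. */
        list_move_tail(&class->lock_entry, &all_lock_classes);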

The changes compared to v1 are:
- Addressed Peter's review comments: removed the list_head that I had added
to struct lock_list, replaced all_list_entries and free_list_entries with
two bitmaps, used call_rcu() to free lockdep objects, and added a
BUILD_BUG_ON() that compares the sizes of struct lock_class_key and
raw_spinlock_t (see the sketch after this list).
- Addressed the "unknown symbol" errors reported by the build bot by adding a
few #ifdef / #endif directives. Addressed the 32-bit warnings by using %d
instead of %ld for array indices and by casting the array indices to
unsigned int.
- Removed several WARN_ON_ONCE(!class->hash_entry.pprev) statements since
these duplicate the code in check_data_structures().
- Left out the patch that makes lockdep complain if no name has been
assigned to a lock object. That patch causes the build bot to complain about
certain lock objects, but I have not yet had the time to figure out which
lock objects these are.
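
The compile-time size check mentioned in the first item could look as
follows (a sketch; the exact placement in the series may differ):

        /*
         * Static locks use the address of the lock object as class key,
         * so struct lock_class_key must not be larger than the smallest
         * lock object, a raw spinlock.
         */
        BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));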

Bart Van Assche (23):
locking/lockdep: Fix two 32-bit compiler warnings
locking/lockdep: Fix reported required memory size (1/2)
locking/lockdep: Fix reported required memory size (2/2)
locking/lockdep: Avoid that add_chain_cache() adds an invalid chain to
the cache
locking/lockdep: Reorder struct lock_class members
locking/lockdep: Make zap_class() remove all matching lock order
entries
locking/lockdep: Initialize the locks_before and locks_after lists
earlier
locking/lockdep: Split lockdep_free_key_range() and
lockdep_reset_lock()
locking/lockdep: Make it easy to detect whether or not inside a
selftest
locking/lockdep: Update two outdated comments
locking/lockdep: Free lock classes that are no longer in use
locking/lockdep: Reuse list entries that are no longer in use
locking/lockdep: Introduce lockdep_next_lockchain() and
lock_chain_count()
locking/lockdep: Fix a comment in add_chain_cache()
locking/lockdep: Reuse lock chains that have been freed
locking/lockdep: Check data structure consistency
locking/lockdep: Verify whether lock objects are small enough to be
used as class keys
locking/lockdep: Add support for dynamic keys
kernel/workqueue: Use dynamic lockdep keys for workqueues
locking/spinlock: Introduce spin_lock_init_key()
block: Avoid that flushing triggers a lockdep complaint
lockdep tests: Fix run_tests.sh
lockdep tests: Test dynamic key registration

block/blk-flush.c | 5 +-
block/blk.h | 1 +
include/linux/lockdep.h | 50 +-
include/linux/spinlock.h | 15 +
include/linux/workqueue.h | 28 +-
kernel/locking/lockdep.c | 887 +++++++++++++++---
kernel/locking/lockdep_internals.h | 3 +-
kernel/locking/lockdep_proc.c | 12 +-
kernel/workqueue.c | 59 +-
lib/locking-selftest.c | 2 +
tools/lib/lockdep/include/liblockdep/common.h | 2 +
tools/lib/lockdep/include/liblockdep/mutex.h | 11 +-
tools/lib/lockdep/run_tests.sh | 6 +-
tools/lib/lockdep/tests/ABBA.c | 9 +
14 files changed, 910 insertions(+), 180 deletions(-)

--
2.21.0.rc0.258.g878e2cd30e-goog