[RFC tip/locking/lockdep v4 08/17] lockdep: Fix recursive read lock related safe->unsafe detection

From: Boqun Feng
Date: Tue Jan 09 2018 - 09:37:29 EST


There are four deadlock cases related to recursive read locks:

(--(X..Y)--> denotes a strong dependency path that starts with a
--(X*)--> dependency and ends with a --(*Y)--> dependency.)

1. An irq-safe lock L1 has a dependency --(*..*)--> to an
   irq-unsafe lock L2.

2. An irq-read-safe lock L1 has a dependency --(N..*)--> to an
   irq-unsafe lock L2.

3. An irq-safe lock L1 has a dependency --(*..N)--> to an
   irq-read-unsafe lock L2.

4. An irq-read-safe lock L1 has a dependency --(N..N)--> to an
   irq-read-unsafe lock L2.
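
As an illustration of case 3 (a sketch with hypothetical locks, not
taken from an actual report): let A be a spinlock taken in hardirq
context (irq-safe) and B an rwlock that is read-held with irqs
enabled (irq-read-unsafe), with the dependency A -> B ending in a
non-recursive write_lock():

	CPU 0				CPU 1
	-----				-----
	read_lock(&B);
					spin_lock_irq(&A);
					write_lock(&B);
					/* waits for CPU 0's reader */
	<hardirq>
	  spin_lock(&A);
	  /* waits for CPU 1 -> deadlock */

CPU 1's write_lock(&B) has to wait for CPU 0's reader, while CPU 0's
reader is stuck in the interrupt handler waiting for A, which CPU 1
holds. Had the dependency instead ended with a recursive read of B,
CPU 1 would not block behind CPU 0's reader and the cycle would not
close, which is why the type of the ending dependency matters.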

The current check_usage() only checks cases 1) and 2), so this patch
adds checks for cases 3) and 4), and makes sure that when
find_usage_{back,for}wards finds an irq-read-{,un}safe lock, the
traversed path ends with a --(*N)--> dependency. Note that when we
search backwards, --(*N)--> indicates a real dependency --(N*)-->.
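
For reference, this relies on the lock_usage_bit layout in which each
state's _READ variant directly follows its non-read counterpart
(expanded below for the HARDIRQ state only, as an illustration):

	LOCK_USED_IN_HARDIRQ		/* bit 0 */
	LOCK_USED_IN_HARDIRQ_READ	/* bit 1 */
	LOCK_ENABLED_HARDIRQ		/* bit 2 */
	LOCK_ENABLED_HARDIRQ_READ	/* bit 3 */
	(the remaining states follow the same pattern)

so every *_READ bit is odd, which is what the ub & 1 test in
usage_match() relies on, and exclusive_bit(bit) + 1 is the _READ
variant of the exclusive bit that the new checks pass down.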

Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
---
kernel/locking/lockdep.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 4655219c28c1..c7b1273a044a 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1499,7 +1499,14 @@ check_redundant(struct lock_list *root, struct held_lock *target,

static inline int usage_match(struct lock_list *entry, void *bit)
{
-	return entry->class->usage_mask & (1 << (enum lock_usage_bit)bit);
+	enum lock_usage_bit ub = (enum lock_usage_bit)bit;
+
+
+	if (ub & 1)
+		return entry->class->usage_mask & (1 << ub) &&
+		       !entry->is_rr;
+	else
+		return entry->class->usage_mask & (1 << ub);
}


@@ -1810,6 +1817,10 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
			 exclusive_bit(bit), state_name(bit)))
		return 0;

+	if (!check_usage(curr, prev, next, bit,
+			 exclusive_bit(bit) + 1, state_name(bit)))
+		return 0;
+
	bit++; /* _READ */

	/*
@@ -1822,6 +1833,10 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
			 exclusive_bit(bit), state_name(bit)))
		return 0;

+	if (!check_usage(curr, prev, next, bit,
+			 exclusive_bit(bit) + 1, state_name(bit)))
+		return 0;
+
	return 1;
}

--
2.15.1