[PATCH v3 30/30] locking/lockdep: Remove irq-safe to irq-unsafe read check
From: Yuyang Du
Date: Fri Jun 28 2019 - 05:18:06 EST
We have a lockdep warning:
========================================================
WARNING: possible irq lock inversion dependency detected
5.1.0-rc7+ #141 Not tainted
--------------------------------------------------------
kworker/8:2/328 just changed the state of lock:
0000000007f1a95b (&(&host->lock)->rlock){-...}, at: ata_bmdma_interrupt+0x27/0x1c0 [libata]
but this lock took another, HARDIRQ-READ-unsafe lock in the past:
(&trig->leddev_list_lock){.+.?}
and interrupts could create inverse lock ordering between them.
other info that might help us debug this:
Possible interrupt unsafe locking scenario:
       CPU0                    CPU1
       ----                    ----
  lock(&trig->leddev_list_lock);
                               local_irq_disable();
                               lock(&(&host->lock)->rlock);
                               lock(&trig->leddev_list_lock);
  <Interrupt>
    lock(&(&host->lock)->rlock);

 *** DEADLOCK ***
This splat is a false positive, made possible by the addition of recursive read
locks to the dependency graph. Specifically, trig->leddev_list_lock is an
rwlock_t, which was not represented in the graph before recursive read lock
support was added to lockdep.
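
To make the pattern behind the splat concrete, here is a minimal, hypothetical
sketch of the two paths involved (the fake_* types and functions are
illustrative stand-ins for the LED trigger and libata code, not the actual
kernel sources):

	#include <linux/interrupt.h>
	#include <linux/spinlock.h>

	/* Hypothetical stand-ins for the structures named in the splat. */
	struct fake_trigger {
		rwlock_t leddev_list_lock;
	};

	struct fake_host {
		spinlock_t lock;		/* the HARDIRQ-safe lock */
		struct fake_trigger *trig;
	};

	/*
	 * Process context, hardirqs enabled: leddev_list_lock is taken for
	 * read, which is what makes it HARDIRQ-READ-unsafe.
	 */
	static void fake_led_trigger_event(struct fake_trigger *trig)
	{
		read_lock(&trig->leddev_list_lock);
		/* ... walk the LED device list ... */
		read_unlock(&trig->leddev_list_lock);
	}

	/*
	 * Hardirq context: host->lock is HARDIRQ-safe, and while it is held
	 * leddev_list_lock is read-locked again, recording the
	 * host->lock -> leddev_list_lock (read) dependency.
	 */
	static irqreturn_t fake_host_interrupt(int irq, void *dev_id)
	{
		struct fake_host *host = dev_id;

		spin_lock(&host->lock);
		fake_led_trigger_event(host->trig);
		spin_unlock(&host->lock);

		return IRQ_HANDLED;
	}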
This false positive is produced by one of lockdep's IRQ usage checks, a check
that does not correspond to a real deadlock scenario.
In mark_lock_irq(), the following checks are currently performed:
 ------------------------------------
|    ->     |  unsafe  | read unsafe |
|-----------+----------+-------------|
|   safe    |   F  B   |   F*  B*    |
|-----------+----------+-------------|
| read safe |  F*  B*  |      -      |
 ------------------------------------
Where:
F: check_usage_forwards
B: check_usage_backwards
*: check enabled by STRICT_READ_CHECKS
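
Roughly, these table cells map onto two conditions in mark_lock_irq() (a
conceptual sketch based on the legend above and the hunks below, not the
verbatim kernel code):

	/*
	 * 'read' is set when the new usage bit is a *_READ bit, 'dir' when
	 * it is an ENABLED_* (irq-unsafe) bit.  Marking a USED_IN (irq-safe)
	 * bit searches the graph forwards (F) for conflicting ENABLED usage;
	 * marking an ENABLED bit searches backwards (B) for conflicting
	 * USED_IN usage.
	 */
	check_usage_f usage = dir ? check_usage_backwards	/* B */
				  : check_usage_forwards;	/* F */

	/*
	 * Conflict against the non-read exclusive bit.  When the new usage
	 * itself is a read bit, this is one of the starred (*) checks and
	 * only runs with STRICT_READ_CHECKS.
	 */
	if ((!read || STRICT_READ_CHECKS) &&
	    !usage(curr, this, excl_bit,
		   state_name(new_bit & ~LOCK_USAGE_READ_MASK)))
		return 0;

	/*
	 * Conflict against the read variant of the exclusive bit.  Both
	 * table cells covered here are starred, so the whole check is
	 * gated on STRICT_READ_CHECKS.
	 */
	if (STRICT_READ_CHECKS &&
	    !usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK,
		   state_name(new_bit + LOCK_USAGE_READ_MASK)))
		return 0;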
But the safe -> read unsafe dependency does not in fact create a deadlock
scenario. In the splat above, for example, the acquisitions of
trig->leddev_list_lock involved are read acquisitions, and a reader never has
to wait for another reader, so the cycle shown in the scenario cannot close.

Fix this by simply removing those two checks. And since read safe -> unsafe is
indeed a problem, the checks that remain are mandatory rather than "strict"
extras, so remove the STRICT_READ_CHECKS macro as well. This leaves the
following checks:
 ------------------------------------
|    ->     |  unsafe  | read unsafe |
|-----------+----------+-------------|
|   safe    |   F  B   |      -      |
|-----------+----------+-------------|
| read safe |   F  B   |      -      |
 ------------------------------------
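
In terms of the two conditions touched by the hunks below, the result is
(again a conceptual sketch, not the verbatim code):

	/*
	 * Runs unless the new bit is a read-unsafe (ENABLED_*_READ) bit;
	 * that skipped case is exactly the removed backward
	 * safe -> read unsafe check.
	 */
	if ((!read || !dir) &&
	    !usage(curr, this, excl_bit,
		   state_name(new_bit & ~LOCK_USAGE_READ_MASK)))
		return 0;

	/*
	 * The read-variant conflict is still checked backwards when an
	 * ENABLED_* bit is marked; this is the kept read safe -> unsafe
	 * check.  The forward check that implemented safe -> read unsafe
	 * is gone together with STRICT_READ_CHECKS.
	 */
	if (dir &&
	    !usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK,
		   state_name(new_bit + LOCK_USAGE_READ_MASK)))
		return 0;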
Signed-off-by: Yuyang Du <duyuyang@xxxxxxxxx>
---
kernel/locking/lockdep.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index c7ba647..d12ab0e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3558,8 +3558,6 @@ static int SOFTIRQ_verbose(struct lock_class *class)
 	return 0;
 }
 
-#define STRICT_READ_CHECKS 1
-
 static int (*state_verbose_f[])(struct lock_class *class) = {
 #define LOCKDEP_STATE(__STATE) \
 	__STATE##_verbose,
@@ -3605,7 +3603,7 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 	 * Validate that the lock dependencies don't have conflicting usage
 	 * states.
 	 */
-	if ((!read || STRICT_READ_CHECKS) &&
+	if ((!read || !dir) &&
 	    !usage(curr, this, excl_bit, state_name(new_bit & ~LOCK_USAGE_READ_MASK)))
 		return 0;
 
@@ -3616,7 +3614,7 @@ typedef int (*check_usage_f)(struct task_struct *, struct held_lock *,
 	if (!valid_state(curr, this, new_bit, excl_bit + LOCK_USAGE_READ_MASK))
 		return 0;
 
-	if (STRICT_READ_CHECKS &&
+	if (dir &&
 	    !usage(curr, this, excl_bit + LOCK_USAGE_READ_MASK,
 		   state_name(new_bit + LOCK_USAGE_READ_MASK)))
 		return 0;
--
1.8.3.1