[RFC tip/locking/lockdep v3 08/14] lockdep: Fix recursive read lock related safe->unsafe detection

From: Boqun Feng
Date: Mon Sep 25 2017 - 18:19:14 EST


There are four cases of recursive read lock related deadlocks:

(--(X..Y)--> means a strong dependency path that starts with a --(X*)-->
dependency and ends with a --(*Y)--> dependency.)

1. An irq-safe lock L1 has a dependency --(*..*)--> to an
irq-unsafe lock L2.

2. An irq-read-safe lock L1 has a dependency --(N..*)--> to an
irq-unsafe lock L2.

3. An irq-safe lock L1 has a dependency --(*..N)--> to an
irq-read-unsafe lock L2 (see the sketch below the list).

4. An irq-read-safe lock L1 has a dependency --(N..N)--> to an
irq-read-unsafe lock L2.
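
To make case 3 above concrete, here is a hypothetical scenario; it is
not taken from the patch, the lock names are made up, and it assumes
the usual rwlock semantics where a writer waits for every current
reader while a recursive reader never waits for anybody:

#include <linux/spinlock.h>     /* rwlock_t, read_lock(), write_lock() */

static DEFINE_RWLOCK(L1);       /* irq-safe: write-locked in hardirq */
static DEFINE_RWLOCK(L2);       /* irq-read-unsafe: read-locked, irqs on */

/* CPU 0, process context, hardirqs enabled */
static void cpu0_task(void)
{
        read_lock(&L2);         /* L2 becomes irq-read-unsafe */
        /* ... a hardirq arrives here and runs cpu0_hardirq() ... */
        read_unlock(&L2);
}

/* CPU 1, process context: creates the L1 --(N..N)--> L2 dependency */
static void cpu1_task(void)
{
        write_lock(&L1);
        write_lock(&L2);        /* non-recursive end: waits for CPU 0's reader */
        write_unlock(&L2);
        write_unlock(&L1);
}

/* the hardirq handler that interrupts cpu0_task() */
static void cpu0_hardirq(void)
{
        write_lock(&L1);        /* L1 is irq-safe; spins, CPU 1 holds L1 */
        write_unlock(&L1);
}

CPU 1 waits for CPU 0's reader of L2, CPU 0 cannot release L2 because
its hardirq handler is spinning on L1, and L1 is held by CPU 1, so
nobody makes progress. Had cpu1_task() ended the path with a recursive
read of L2 instead, it would never have to wait for CPU 0's reader and
this particular deadlock could not form, which is why the detection
only reports paths that end with a --(*N)--> dependency.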

The current check_usage() only checks cases 1) and 2), so this patch
adds checks for cases 3) and 4) and makes sure that when
find_usage_{back,for}wards finds an irq-read-{,un}safe lock, the
traversed path ends with a --(*N)--> dependency. Note that when we
search backwards, --(*N)--> stands for a real dependency --(N*)-->.
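
For reference, the following stand-alone user-space sketch (it is not
kernel code; it only mirrors the HARDIRQ part of the bit layout that
lockdep generates from lockdep_states.h, where the _READ variant of a
usage bit is always the next, odd bit) prints which (backward, forward)
usage-bit pair each of the four check_usage() calls in check_irq_usage()
ends up searching for:

#include <stdio.h>

/* Mirrors the first four values of lockdep's enum lock_usage_bit. */
enum lock_usage_bit {
        LOCK_USED_IN_HARDIRQ,           /* 0: irq-safe */
        LOCK_USED_IN_HARDIRQ_READ,      /* 1: irq-read-safe */
        LOCK_ENABLED_HARDIRQ,           /* 2: irq-unsafe */
        LOCK_ENABLED_HARDIRQ_READ,      /* 3: irq-read-unsafe */
};

/* Same idea as lockdep's exclusive_bit(): flip USED_IN/ENABLED, strip _READ. */
static int exclusive_bit(int new_bit)
{
        int state = new_bit & ~3;
        int dir = new_bit & 2;

        return state | (dir ^ 2);
}

static const char *name(int bit)
{
        static const char * const n[] = {
                "USED_IN_HARDIRQ", "USED_IN_HARDIRQ_READ",
                "ENABLED_HARDIRQ", "ENABLED_HARDIRQ_READ",
        };

        return n[bit & 3];
}

int main(void)
{
        int bit = LOCK_USED_IN_HARDIRQ;

        /* the four (backward, forward) searches after this patch */
        printf("case 1: %s -> %s\n", name(bit), name(exclusive_bit(bit)));
        printf("case 3: %s -> %s\n", name(bit), name(exclusive_bit(bit) + 1));
        bit++;  /* _READ */
        printf("case 2: %s -> %s\n", name(bit), name(exclusive_bit(bit)));
        printf("case 4: %s -> %s\n", name(bit), name(exclusive_bit(bit) + 1));

        return 0;
}

Built with e.g. "gcc -o usage_bits usage_bits.c", it prints
USED_IN_HARDIRQ -> ENABLED_HARDIRQ_READ for case 3 and
USED_IN_HARDIRQ_READ -> ENABLED_HARDIRQ_READ for case 4, i.e. exactly
the two extra checks added below.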

Signed-off-by: Boqun Feng <boqun.feng@xxxxxxxxx>
---
kernel/locking/lockdep.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index f55c9012025e..c29b058c37b3 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -1505,7 +1505,14 @@ check_redundant(struct lock_list *root, struct held_lock *target,

 static inline int usage_match(struct lock_list *entry, void *bit)
 {
-        return entry->class->usage_mask & (1 << (enum lock_usage_bit)bit);
+        enum lock_usage_bit ub = (enum lock_usage_bit)bit;
+
+
+        if (ub & 1)
+                return entry->class->usage_mask & (1 << ub) &&
+                       !entry->is_rr;
+        else
+                return entry->class->usage_mask & (1 << ub);
 }


@@ -1816,6 +1823,10 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
                            exclusive_bit(bit), state_name(bit)))
                 return 0;
 
+        if (!check_usage(curr, prev, next, bit,
+                           exclusive_bit(bit) + 1, state_name(bit)))
+                return 0;
+
         bit++; /* _READ */
 
         /*
@@ -1828,6 +1839,10 @@ static int check_irq_usage(struct task_struct *curr, struct held_lock *prev,
                            exclusive_bit(bit), state_name(bit)))
                 return 0;
 
+        if (!check_usage(curr, prev, next, bit,
+                           exclusive_bit(bit) + 1, state_name(bit)))
+                return 0;
+
         return 1;
 }

--
2.14.1