Re: [PATCH 02/11] rcu: fix bug when rcu_exp_handler() in nested interrupt

From: Lai Jiangshan
Date: Thu Oct 31 2019 - 10:21:34 EST




On 2019/10/31 9:47, Paul E. McKenney wrote:
> On Thu, Oct 31, 2019 at 10:07:57AM +0000, Lai Jiangshan wrote:
>> There is a possible bug (although one I can't trigger yet)
>> since the 2015 commit 8203d6d0ee78
>> ("rcu: Use single-stage IPI algorithm for RCU expedited grace period"):
>>
>> rcu_read_unlock()
>>   ->rcu_read_lock_nesting = -RCU_NEST_BIAS;
>>   interrupt(); // before or after rcu_read_unlock_special()
>>     rcu_read_lock()
>>       fetch some rcu protected pointers
>>       // exp GP starts in other cpu.
>>       some works
>>       NESTED interrupt for rcu_exp_handler();
>>         report exp qs! BUG!
>
> Why would a quiescent state for the expedited grace period be reported
> here?  This CPU is still in an RCU read-side critical section, isn't it?
>
> 							Thanx, Paul

Remember, ->rcu_read_lock_nesting is -RCU_NEST_BIAS + 1 now.
In rcu_exp_handler(), it goes into this branch:

	rdp->exp_deferred_qs = true;
	if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
	    WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs())) {
		rcu_preempt_deferred_qs(t);
	} else {

and rcu_preempt_deferred_qs(t) reports the QS whenever
->rcu_read_lock_nesting is negative, no matter what its exact value is;
in other words, "-RCU_NEST_BIAS + 1" is treated no differently
from "-RCU_NEST_BIAS".


>>       // exp GP completes and pointers are freed in other cpu
>>       some works with the pointers. BUG
>>     rcu_read_unlock();
>>   ->rcu_read_lock_nesting = 0;

Although rcu_sched_clock_irq() can also run from a nested interrupt,
there is no similar bug there, since special.b.need_qs can only be
set when ->rcu_read_lock_nesting > 0.

Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
kernel/rcu/tree_exp.h | 5 +++--
kernel/rcu/tree_plugin.h | 9 ++++++---
2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 6dec21909b30..c0d06bce35ea 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -664,8 +664,9 @@ static void rcu_exp_handler(void *unused)
  * Otherwise, force a context switch after the CPU enables everything.
  */
 	rdp->exp_deferred_qs = true;
-	if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
-	    WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs())) {
+	if (rcu_preempt_need_deferred_qs(t) &&
+	    (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
+	     WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs()))) {
 		rcu_preempt_deferred_qs(t);
 	} else {
 		set_tsk_need_resched(t);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index d4c482490589..59ef10da1e39 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -549,9 +549,12 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
  */
 static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
 {
-	return (__this_cpu_read(rcu_data.exp_deferred_qs) ||
-		READ_ONCE(t->rcu_read_unlock_special.s)) &&
-	       t->rcu_read_lock_nesting <= 0;
+	return (__this_cpu_read(rcu_data.exp_deferred_qs) &&
+		(!t->rcu_read_lock_nesting ||
+		 t->rcu_read_lock_nesting == -RCU_NEST_BIAS))
+		||
+	       (READ_ONCE(t->rcu_read_unlock_special.s) &&
+		t->rcu_read_lock_nesting <= 0);
 }

 /*
--
2.20.1