On Thu, Oct 31, 2019 at 06:47:31AM -0700, Paul E. McKenney wrote:
On Thu, Oct 31, 2019 at 10:07:57AM +0000, Lai Jiangshan wrote:
There is a possible bug (although I can't trigger it yet), present
since the 2015 commit 8203d6d0ee78 ("rcu: Use single-stage IPI
algorithm for RCU expedited grace period"):

rcu_read_unlock()
  ->rcu_read_lock_nesting = -RCU_NEST_BIAS;
  interrupt(); // before or after rcu_read_unlock_special()
    rcu_read_lock()
    fetch some RCU-protected pointers
    // exp GP starts on another CPU.
    do some work
    NESTED interrupt for rcu_exp_handler();
Also, which platforms support nested interrupts? Last I knew, this was
prohibited.
      report exp qs! BUG!
Why would a quiescent state for the expedited grace period be reported
here? This CPU is still in an RCU read-side critical section, isn't it?
And I now see what you were getting at here. Yes, the current code
assumes that interrupt-disabled regions, like hardware interrupt
handlers, cannot be interrupted. But if interrupt-disabled regions such
as hardware interrupt handlers can be interrupted (as opposed to being
NMIed), wouldn't that break a whole lot of stuff all over the place in
the kernel? So that sounds like an arch bug to me.
Thanx, Paul
    // exp GP completes and pointers are freed on another CPU
    do some work with the pointers. BUG
    rcu_read_unlock();
  ->rcu_read_lock_nesting = 0;
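
For reference, the window this scenario depends on is the one that
__rcu_read_unlock() opens before it finally zeroes
->rcu_read_lock_nesting. A simplified sketch of that path (from memory,
not verbatim mainline code; debug checks omitted):

void __rcu_read_unlock(void)
{
        struct task_struct *t = current;

        if (t->rcu_read_lock_nesting != 1) {
                /* Nested reader: just decrement the count. */
                --t->rcu_read_lock_nesting;
        } else {
                /* Outermost reader: bias the count negative... */
                barrier();
                t->rcu_read_lock_nesting = -RCU_NEST_BIAS;
                barrier();
                /* ...do any deferred unlock-special work... */
                if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
                        rcu_read_unlock_special(t);
                barrier();
                /* ...and only now mark the reader as fully exited. */
                t->rcu_read_lock_nesting = 0;
        }
}

The interrupt in the scenario lands anywhere between the two
assignments above, which is why the nested rcu_read_lock() /
rcu_read_unlock() pair runs with a negative ->rcu_read_lock_nesting.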
Although rcu_sched_clock_irq() can also run from a nested interrupt,
there is no similar bug there, since special.b.need_qs can only be set
when ->rcu_read_lock_nesting > 0.
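
For reference, a condensed sketch of the scheduler-tick path that sets
need_qs (from memory, simplified, not verbatim mainline code), showing
the nesting check that the statement above relies on:

        /* In rcu_flavor_sched_clock_irq(): need_qs is only ever set
         * while the task is actually inside a read-side critical
         * section; other grace-period conditions are omitted here. */
        if (t->rcu_read_lock_nesting > 0 &&
            __this_cpu_read(rcu_data.core_needs_qs) &&
            !t->rcu_read_unlock_special.b.need_qs)
                t->rcu_read_unlock_special.b.need_qs = true;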
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
kernel/rcu/tree_exp.h | 5 +++--
kernel/rcu/tree_plugin.h | 9 ++++++---
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 6dec21909b30..c0d06bce35ea 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -664,8 +664,9 @@ static void rcu_exp_handler(void *unused)
* Otherwise, force a context switch after the CPU enables everything.
*/
rdp->exp_deferred_qs = true;
- if (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
- WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs())) {
+ if (rcu_preempt_need_deferred_qs(t) &&
+ (!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)) ||
+ WARN_ON_ONCE(rcu_dynticks_curr_cpu_in_eqs()))) {
rcu_preempt_deferred_qs(t);
} else {
set_tsk_need_resched(t);
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index d4c482490589..59ef10da1e39 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -549,9 +549,12 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
*/
static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{
- return (__this_cpu_read(rcu_data.exp_deferred_qs) ||
- READ_ONCE(t->rcu_read_unlock_special.s)) &&
- t->rcu_read_lock_nesting <= 0;
+ return (__this_cpu_read(rcu_data.exp_deferred_qs) &&
+ (!t->rcu_read_lock_nesting ||
+ t->rcu_read_lock_nesting == -RCU_NEST_BIAS))
+ ||
+ (READ_ONCE(t->rcu_read_unlock_special.s) &&
+ t->rcu_read_lock_nesting <= 0);
}
/*
--
2.20.1
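
If I am reading the patch right, the new rcu_preempt_need_deferred_qs()
is equivalent to the following, written out as two separate tests (this
is just the patched return statement reformatted, not new logic):

static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{
        /* A deferred expedited QS may be reported only when no reader
         * can still be running, i.e. when ->rcu_read_lock_nesting is
         * exactly 0 or exactly -RCU_NEST_BIAS. */
        if (__this_cpu_read(rcu_data.exp_deferred_qs) &&
            (!t->rcu_read_lock_nesting ||
             t->rcu_read_lock_nesting == -RCU_NEST_BIAS))
                return true;

        /* Other unlock-special work remains keyed off nesting <= 0. */
        return READ_ONCE(t->rcu_read_unlock_special.s) &&
               t->rcu_read_lock_nesting <= 0;
}

In the nested-interrupt scenario above, ->rcu_read_lock_nesting is
-RCU_NEST_BIAS + 1 when rcu_exp_handler() runs, so (assuming nothing
has set ->rcu_read_unlock_special.s) both tests are false,
rcu_preempt_deferred_qs() is skipped, and the handler falls back to
set_tsk_need_resched() instead of reporting a quiescent state while the
inner read-side critical section is still live.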