[PATCH v2 18/35] rcu: force context-switch for PREEMPT_RCU=n, PREEMPT_COUNT=y

From: Ankur Arora
Date: Mon May 27 2024 - 20:39:31 EST


With (PREEMPT_RCU=n, PREEMPT_COUNT=y), rcu_flavor_sched_clock_irq()
registers urgently needed quiescent states when preempt_count() shows
that neither a task nor a softirq is in a non-preemptible section.
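
For reference, that check looks roughly like the simplified sketch
below (modelled on the PREEMPT_RCU=n flavor of
rcu_flavor_sched_clock_irq() in kernel/rcu/tree_plugin.h; details may
differ by kernel version):

	static void rcu_flavor_sched_clock_irq(int user)
	{
		if (user || rcu_is_cpu_rrupt_from_idle() ||
		    (IS_ENABLED(CONFIG_PREEMPT_COUNT) &&
		     !(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)))) {
			/*
			 * Neither a task nor a softirq is in a
			 * non-preemptible section, so report the
			 * quiescent state.
			 */
			rcu_qs();
		}
	}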

This, however, does nothing for long-running loops where preemption
is only temporarily enabled, since the tick is unlikely to neatly fall
within the preemptible() section.
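
As an illustration of the problem (hypothetical loop; process_item()
and nr_items are made up for this example):

	static void example_long_loop(void)
	{
		int i;

		/*
		 * The preemptible window between preempt_enable() and
		 * the next preempt_disable() is narrow, so the tick
		 * almost always observes a non-zero preempt_count()
		 * and the quiescent state is never reported.
		 */
		for (i = 0; i < nr_items; i++) {
			preempt_disable();
			process_item(i);	/* bulk of the work */
			preempt_enable();	/* brief preemptible window */
		}
	}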

Handle that by forcing a context switch when we need a quiescent
state urgently but are holding a non-zero preempt_count().

Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
Signed-off-by: Ankur Arora <ankur.a.arora@xxxxxxxxxx>
---
kernel/rcu/tree.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index d9642dd06c25..3a0e1d0b939c 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2286,8 +2286,17 @@ void rcu_sched_clock_irq(int user)
 	raw_cpu_inc(rcu_data.ticks_this_gp);
 	/* The load-acquire pairs with the store-release setting to true. */
 	if (smp_load_acquire(this_cpu_ptr(&rcu_data.rcu_urgent_qs))) {
-		/* Idle and userspace execution already are quiescent states. */
-		if (!rcu_is_cpu_rrupt_from_idle() && !user) {
+		/*
+		 * Idle and userspace execution already are quiescent states.
+		 * If, however, we came here from a nested interrupt in the
+		 * kernel, or if we have PREEMPT_RCU=n but are holding a
+		 * preempt_count() (say, with CONFIG_PREEMPT_AUTO=y), then
+		 * force a context switch.
+		 */
+		if ((!rcu_is_cpu_rrupt_from_idle() && !user) ||
+		    ((!IS_ENABLED(CONFIG_PREEMPT_RCU) &&
+		      IS_ENABLED(CONFIG_PREEMPT_COUNT)) &&
+		     (preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK)))) {
 			set_tsk_need_resched(current);
 			set_preempt_need_resched();
 		}
--
2.31.1