On Mon, Jun 22, 2020 at 11:37:03PM +0530, Neeraj Upadhyay wrote:
> Remove the CONFIG_PREEMPT_RCU check in force_qs_rnp(). Originally,
> this check was required to skip executing the fqs failsafe
> for rcu-sched, which was added in commit a77da14ce9af ("rcu:
> Yet another fix for preemption and CPU hotplug"). However,
> this failsafe has since been removed. So, clean up the
> code to avoid any confusion around the need for boosting
> for !CONFIG_PREEMPT_RCU.
>
> Signed-off-by: Neeraj Upadhyay <neeraju@xxxxxxxxxxxxxx>
Good point, there is a !PREEMPT definition of the function
rcu_preempt_blocked_readers_cgp() that unconditionally returns zero.
And in !PREEMPT kernels, the same thing happens in the "if" body as
after it, so behavior is not changed.
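
For reference, the CONFIG_PREEMPT_RCU=n stub lives in
kernel/rcu/tree_plugin.h and looks roughly like the following (a sketch;
the exact comment wording may differ):

	/*
	 * Because preemptible RCU does not exist, there can never be any
	 * preempted RCU readers blocking the current grace period, so
	 * this stub unconditionally reports "none".
	 */
	static int rcu_preempt_blocked_readers_cgp(struct rcu_node *rnp)
	{
		return 0;
	}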
I have queued and pushed this with an upgraded commit log as
shown below.
Thanx, Paul
> ---
>  kernel/rcu/tree.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 6226bfb..57c904b 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -2514,8 +2514,7 @@ static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
>  		raw_spin_lock_irqsave_rcu_node(rnp, flags);
>  		rcu_state.cbovldnext |= !!rnp->cbovldmask;
>  		if (rnp->qsmask == 0) {
> -			if (!IS_ENABLED(CONFIG_PREEMPT_RCU) ||
> -			    rcu_preempt_blocked_readers_cgp(rnp)) {
> +			if (rcu_preempt_blocked_readers_cgp(rnp)) {
>  				/*
>  				 * No point in scanning bits because they
>  				 * are all zero. But we might need to
> --
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
> a Linux Foundation Collaborative Project
------------------------------------------------------------------------
commit a4600389c35010aef414b89e2817d4a527e751b5
Author: Neeraj Upadhyay <neeraju@xxxxxxxxxxxxxx>
Date: Mon Jun 22 23:37:03 2020 +0530
rcu/tree: Remove CONFIG_PREEMPT_RCU check in force_qs_rnp()
Originally, the call to rcu_preempt_blocked_readers_cgp() from
force_qs_rnp() had to be conditioned on CONFIG_PREEMPT_RCU=y, as in
commit a77da14ce9af ("rcu: Yet another fix for preemption and CPU
hotplug"). However, there is now a CONFIG_PREEMPT_RCU=n definition of
rcu_preempt_blocked_readers_cgp() that unconditionally returns zero, so
invoking it is now safe. In addition, the CONFIG_PREEMPT_RCU=n definition
of rcu_initiate_boost() simply releases the rcu_node structure's ->lock,
which is what happens when the "if" condition evaluates to false.
This commit therefore drops the IS_ENABLED(CONFIG_PREEMPT_RCU) check,
so that rcu_initiate_boost() is called only in CONFIG_PREEMPT_RCU=y
kernels when there are readers blocking the current grace period.
This does not change the behavior, but reduces code-reader confusion by
eliminating non-CONFIG_PREEMPT_RCU=y calls to rcu_initiate_boost().
Signed-off-by: Neeraj Upadhyay <neeraju@xxxxxxxxxxxxxx>
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
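
For reference, the CONFIG_PREEMPT_RCU=n (hence CONFIG_RCU_BOOST=n) version
of rcu_initiate_boost() mentioned above is roughly the following (again a
sketch; the exact comments and annotations may differ), which is why taking
the "if" branch is equivalent to the plain unlock after it:

	/*
	 * Without RCU priority boosting there is nothing to initiate:
	 * just release the rcu_node structure's ->lock, exactly as the
	 * code following the "if" statement does.
	 */
	static void rcu_initiate_boost(struct rcu_node *rnp, unsigned long flags)
		__releases(rnp->lock)
	{
		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
	}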
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 6226bfb..57c904b 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2514,8 +2514,7 @@ static void force_qs_rnp(int (*f)(struct rcu_data *rdp))
 		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 		rcu_state.cbovldnext |= !!rnp->cbovldmask;
 		if (rnp->qsmask == 0) {
-			if (!IS_ENABLED(CONFIG_PREEMPT_RCU) ||
-			    rcu_preempt_blocked_readers_cgp(rnp)) {
+			if (rcu_preempt_blocked_readers_cgp(rnp)) {
 				/*
 				 * No point in scanning bits because they
 				 * are all zero. But we might need to