Re: [PATCH 06/11] rcu: clear t->rcu_read_unlock_special in one go

From: Paul E. McKenney
Date: Fri Nov 01 2019 - 12:58:49 EST


On Fri, Nov 01, 2019 at 05:10:56AM -0700, Paul E. McKenney wrote:
> On Thu, Oct 31, 2019 at 10:08:01AM +0000, Lai Jiangshan wrote:
> > Clearing t->rcu_read_unlock_special in one go makes the code clearer.
> >
> > Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
>
> Nice simplification! I had to hand-apply it due to not having taken the
> earlier patches, plus I redid the commit log. Could you please check
> the version shown below?

Except that this simplification depends on the earlier patches having moved
the (!t->rcu_read_unlock_special.s && !rdp->exp_deferred_qs) check earlier.
Without them, the checks of the now-cleared field inside the "if" statements
always see zero, so the function can return before blocked tasks are
dequeued, which shows up as rcutorture failures.

From what I can see, the only early exit that matters is the first one,
so I am simply removing the early exits within the "if" statements.
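
To make the intent concrete, here is a rough standalone model (ordinary
userspace C, not the kernel code; the union layout only approximates
union rcu_special and the helpers are stand-ins): keep only the first
early exit, clear every flag in one go through the aggregate member, and
let the remaining branches key off the snapshot taken before the clearing.

/* Standalone sketch: union layout approximating union rcu_special. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

union special_model {
	struct {
		uint8_t blocked;
		uint8_t need_qs;
		uint8_t exp_hint;
		uint8_t deferred_qs;
	} b;
	uint32_t s;		/* all of the above, viewed as one word */
};

/* Stand-ins for rcu_qs() and the blocked-task dequeue. */
static void model_rcu_qs(void)  { printf("report quiescent state\n"); }
static void model_dequeue(void) { printf("dequeue blocked task\n"); }

static void deferred_qs_model(union special_model *live, bool exp_deferred_qs)
{
	union special_model special = *live;	/* snapshot, like "special" */

	/* The only early exit that matters: nothing to do at all. */
	if (!special.s && !exp_deferred_qs)
		return;

	live->s = 0;	/* clear every flag in one go */

	/* No further early exits: each branch tests the snapshot. */
	if (special.b.need_qs)
		model_rcu_qs();
	if (special.b.blocked)
		model_dequeue();
}

int main(void)
{
	union special_model m = { .b = { .blocked = 1, .need_qs = 1 } };

	deferred_qs_model(&m, false);	/* both actions run; live flags now zero */
	return 0;
}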

Thanx, Paul

> ------------------------------------------------------------------------
>
> commit 0bef7971edbbd35ed4d1682a465f682077981e85
> Author: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
> Date: Fri Nov 1 05:06:21 2019 -0700
>
> rcu: Clear ->rcu_read_unlock_special only once
>
> In rcu_preempt_deferred_qs_irqrestore(), ->rcu_read_unlock_special is
> cleared one piece at a time. Given that the "if" statements in this
> function use the copy in "special", this commit removes the clearing
> of the individual pieces in favor of clearing ->rcu_read_unlock_special
> in one go just after it has been determined to be non-zero.
>
> Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index 8d0e8c1..d113923 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -444,11 +444,9 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
>  		local_irq_restore(flags);
>  		return;
>  	}
> -	t->rcu_read_unlock_special.b.exp_hint = false;
> -	t->rcu_read_unlock_special.b.deferred_qs = false;
> +	t->rcu_read_unlock_special.s = 0;
>  	if (special.b.need_qs) {
>  		rcu_qs();
> -		t->rcu_read_unlock_special.b.need_qs = false;
>  		if (!t->rcu_read_unlock_special.s && !rdp->exp_deferred_qs) {
>  			local_irq_restore(flags);
>  			return;
> @@ -471,7 +469,6 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
>
>  	/* Clean up if blocked during RCU read-side critical section. */
>  	if (special.b.blocked) {
> -		t->rcu_read_unlock_special.b.blocked = false;
>
>  		/*
>  		 * Remove this task from the list it blocked on.  The task