Re: [PATCH V2 6/7] rcu: clear the special.b.need_qs in rcu_note_context_switch()
From: Paul E. McKenney
Date: Sat Nov 16 2019 - 10:47:16 EST
On Sat, Nov 02, 2019 at 12:45:58PM +0000, Lai Jiangshan wrote:
> It is better to clear special.b.need_qs either when it is
> replaced by special.b.blocked or when it has really fulfilled its
> goal in rcu_preempt_deferred_qs_irqrestore().
>
> It makes rcu_qs() easier to understand, and also prepares for
> the percpu rcu_preempt_depth patch, which requires the rcu_special
> bits to be cleared in irq-disabled context.
>
> Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
This one is a (possible) optimization.
Right now, when the CPU actually passes through the quiescent state,
we clear this field. The quiescent state is not reported until later.
Waiting until later to clear it might cause extra unneeded
quiescent-state work in the meantime. But your point is that the current code might leave
->rcu_read_unlock_special momentarily zero, causing possible trouble
with the remainder of this series, right?
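
For concreteness, here is a minimal userspace sketch of that window.
The layout is simplified from the kernel's union rcu_special, the
names are made up, and in the real code both stores happen with
interrupts disabled under the rcu_node lock:

#include <stdio.h>

/* Simplified stand-in for union rcu_special: two flag bytes overlaid
 * with one word so the whole thing can be tested for zero at once. */
union special_sketch {
	struct {
		unsigned char blocked;
		unsigned char need_qs;
	} b;
	unsigned short s;	/* s == 0 means "nothing special" */
};

int main(void)
{
	union special_sketch sp = { .b = { .blocked = 0, .need_qs = 1 } };

	/* Same store ordering as the patched rcu_note_context_switch(). */
	sp.b.need_qs = 0;
	printf("mid-update: s=%u\n", (unsigned)sp.s);	/* prints 0 */
	sp.b.blocked = 1;
	printf("after:      s=%u\n", (unsigned)sp.s);	/* non-zero again */
	return 0;
}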
Hmmm...
The "right" way to do this would be to have another flag saying
"quiescent state encountered but not yet reported". This would keep
->rcu_read_unlock_special non-zero throughout, and would avoid useless
work looking for additional unneeded quiescent states. Or perhaps have
need_qs be zero for don't need, one for need, and two for have but not
yet reported.
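
Sketched as a (hypothetical, userspace-style) state machine rather
than a proposed kernel patch, that tri-state need_qs would be:

enum need_qs_sketch {
	NEED_QS_NONE = 0,	/* zero: don't need a quiescent state */
	NEED_QS_PENDING = 1,	/* one: need a quiescent state */
	NEED_QS_UNREPORTED = 2,	/* two: have one, not yet reported */
};

/* Passing through a quiescent state records it but keeps the field
 * non-zero, so ->rcu_read_unlock_special as a whole stays non-zero
 * and no useless hunting for further quiescent states is triggered. */
static enum need_qs_sketch note_qs(enum need_qs_sketch s)
{
	return s == NEED_QS_PENDING ? NEED_QS_UNREPORTED : s;
}

/* Only actually reporting the quiescent state clears the field. */
static enum need_qs_sketch report_qs(enum need_qs_sketch s)
{
	return s == NEED_QS_UNREPORTED ? NEED_QS_NONE : s;
}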
Thoughts? Other approaches?
Thanx, Paul
> ---
> kernel/rcu/tree_plugin.h | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> index eb5906c55c8d..e16c3867d2ff 100644
> --- a/kernel/rcu/tree_plugin.h
> +++ b/kernel/rcu/tree_plugin.h
> @@ -264,8 +264,6 @@ static void rcu_qs(void)
>  			       __this_cpu_read(rcu_data.gp_seq),
>  			       TPS("cpuqs"));
>  		__this_cpu_write(rcu_data.cpu_no_qs.b.norm, false);
> -		barrier(); /* Coordinate with rcu_flavor_sched_clock_irq(). */
> -		WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, false);
>  	}
>  }
>
> @@ -297,6 +295,7 @@ void rcu_note_context_switch(bool preempt)
>  		/* Possibly blocking in an RCU read-side critical section. */
>  		rnp = rdp->mynode;
>  		raw_spin_lock_rcu_node(rnp);
> +		t->rcu_read_unlock_special.b.need_qs = false;
>  		t->rcu_read_unlock_special.b.blocked = true;
>  		t->rcu_blocked_node = rnp;
>
> --
> 2.20.1
>