Re: [PATCH V2 2/7] rcu: cleanup rcu_preempt_deferred_qs()

From: Paul E. McKenney
Date: Tue Nov 05 2019 - 02:19:25 EST


On Tue, Nov 05, 2019 at 10:09:15AM +0800, Lai Jiangshan wrote:
> > On 2019/11/4 10:55, Paul E. McKenney wrote:
> > On Sun, Nov 03, 2019 at 01:01:21PM +0800, Lai Jiangshan wrote:
> > >
> > >
> > > > On 2019/11/3 10:01, Boqun Feng wrote:
> > > > Hi Jiangshan,
> > > >
> > > >
> > > > I haven't checked the correctness of this patch carefully, but..
> > > >
> > > >
> > > > On Sat, Nov 02, 2019 at 12:45:54PM +0000, Lai Jiangshan wrote:
> > > > > There is no need to set ->rcu_read_lock_nesting negative: the
> > > > > irq-protected rcu_preempt_deferred_qs_irqrestore() doesn't require
> > > > > ->rcu_read_lock_nesting to be negative in order to work; it doesn't
> > > > > even access ->rcu_read_lock_nesting any more.
> > > >
> > > > rcu_preempt_deferred_qs_irqrestore() will report the RCU qs, and may
> > > > eventually call swake_up() or one of its friends to wake up, say, the
> > > > gp kthread, and the wake-up functions could go into scheduler code
> > > > paths which might contain RCU read-side critical sections, IOW,
> > > > accessing ->rcu_read_lock_nesting.
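(For illustration only: a toy user-space model of the recursion concern
above, not the kernel's actual code. NEST_BIAS, nesting, special, and the
helper names are made-up stand-ins for RCU_NEST_BIAS,
->rcu_read_lock_nesting, ->rcu_read_unlock_special, and the kernel
functions.)

/*
 * Toy model: the deferred-QS work may do a wakeup, and the wakeup path
 * may itself contain a read-side critical section.  Biasing the nesting
 * counter negative keeps that nested unlock away from the special path.
 */
#include <stdbool.h>
#include <stdio.h>

#define NEST_BIAS (1 << 30)

static int nesting;	/* models ->rcu_read_lock_nesting */
static bool special;	/* models ->rcu_read_unlock_special */

static void deferred_qs_irqrestore(void);

static void model_read_lock(void)
{
	nesting++;
}

static void model_read_unlock(void)
{
	nesting--;
	/* Only the outermost unlock with the flag set does deferred-QS work. */
	if (nesting == 0 && special)
		deferred_qs_irqrestore();
}

static void wake_gp_kthread(void)
{
	/* The wakeup path may itself contain a read-side critical section. */
	model_read_lock();
	model_read_unlock();	/* with a biased counter, nesting != 0 here */
}

static void deferred_qs_irqrestore(void)
{
	wake_gp_kthread();	/* reporting the QS may do wakeups... */
	special = false;	/* ...before the flag is finally cleared */
}

/* Models rcu_preempt_deferred_qs() before this patch: bias, work, unbias. */
static void deferred_qs_with_bias(void)
{
	nesting -= NEST_BIAS;
	deferred_qs_irqrestore();
	nesting += NEST_BIAS;
}

int main(void)
{
	special = true;
	deferred_qs_with_bias();
	printf("nesting=%d special=%d\n", nesting, special);
	return 0;
}

In this model, without the bias the nested unlock inside the wakeup path
would see nesting == 0 with the special flag still set and re-enter the
deferred-QS path; the negative bias is what prevents that recursion.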
> > >
> > > Sure, thank you for pointing it out.
> > >
> > > I should rewrite the changelog in next round. Like this:
> > >
> > > rcu: cleanup rcu_preempt_deferred_qs()
> > >
> > > The IRQ-protected rcu_preempt_deferred_qs_irqrestore() itself doesn't
> > > require ->rcu_read_lock_nesting to be negative in order to work.
> > >
> > > There might be RCU read-side critical sections inside it (from wakeup()
> > > or the like), but 1711d15bf5ef ("rcu: Clear ->rcu_read_unlock_special
> > > only once") will ensure that ->rcu_read_unlock_special is zero, so
> > > these RCU read-side critical sections will not call
> > > rcu_read_unlock_special().
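(Again a toy user-space sketch with the same made-up stand-in names, not
kernel code: once the special flag is cleared a single time before any
wakeups, a read-side critical section inside the wakeup path sees it as
zero, so the nesting counter no longer needs to be biased negative.)

#include <stdbool.h>
#include <stdio.h>

static int nesting;	/* models ->rcu_read_lock_nesting */
static bool special;	/* models ->rcu_read_unlock_special */

static void deferred_qs_irqrestore(void)
{
	special = false;	/* cleared only once, before any wakeup */

	/* wakeup path: may contain a read-side critical section */
	nesting++;
	nesting--;
	if (nesting == 0 && special)	/* never true: special already 0 */
		printf("would recurse -- cannot happen\n");
}

/* Models rcu_preempt_deferred_qs() after this patch: no bias needed. */
static void deferred_qs(void)
{
	if (!special)
		return;
	deferred_qs_irqrestore();
}

int main(void)
{
	special = true;
	deferred_qs();
	printf("nesting=%d special=%d\n", nesting, special);
	return 0;
}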
> > >
> > > Thanks
> > > Lai
> > >
> > > ===
> > > PS: Were 1711d15bf5ef ("rcu: Clear ->rcu_read_unlock_special only once")
> > > not applied earlier, it would be protected by the previous patch (patch 1)
> > > in this series,
> > > "rcu: use preempt_count to test whether scheduler locks is held",
> > > when rcu_read_unlock_special() is called.
> >
> > This one in -rcu, you mean?
> >
> > 5c5d9065e4eb ("rcu: Clear ->rcu_read_unlock_special only once")
>
> Yes, but the commit ID is floating in the tree.

Indeed, that part of -rcu is subject to rebase, and will continue
to be until about v5.5-rc5 or thereabouts.

https://mirrors.edge.kernel.org/pub/linux/kernel/people/paulmck/rcutodo.html

My testing of your full stack should be complete by this coming Sunday
morning, Pacific Time.

> > Some adjustment was needed due to my not applying the earlier patches
> > that assumed nested interrupts. Please let me know if further adjustments
> > are needed.
>
> I don't think the earlier patches are needed. If the (possible?) nested
> interrupts described in my previous emails are an issue, the patch
> "rcu: don't use negative ->rcu_read_lock_nesting" in this
> series is enough to fix it. If any adjustments are needed for
> this series, I will just put the adjustments in the series.

Fair enough. Please clearly mark any adjustments so that I can
merge them into the current commits as appropriate. This helps
bisectability later on.

Thanx, Paul

> Thanks
> Lai
>
> >
> > Thanx, Paul
> >
> > > > Again, haven't checked closely, but this argument in the commit log
> > > > seems untrue.
> > > >
> > > > Regards,
> > > > Boqun
> > > >
> > > > >
> > > > > It is true that an NMI arriving over rcu_preempt_deferred_qs_irqrestore()
> > > > > may access ->rcu_read_lock_nesting, but that is still safe,
> > > > > since rcu_read_unlock_special() can protect itself from NMIs.
> > > > >
> > > > > Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
> > > > > ---
> > > > > kernel/rcu/tree_plugin.h | 5 -----
> > > > > 1 file changed, 5 deletions(-)
> > > > >
> > > > > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > > > > index aba5896d67e3..2fab8be2061f 100644
> > > > > --- a/kernel/rcu/tree_plugin.h
> > > > > +++ b/kernel/rcu/tree_plugin.h
> > > > > @@ -552,16 +552,11 @@ static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
> > > > >  static void rcu_preempt_deferred_qs(struct task_struct *t)
> > > > >  {
> > > > >  	unsigned long flags;
> > > > > -	bool couldrecurse = t->rcu_read_lock_nesting >= 0;
> > > > >  
> > > > >  	if (!rcu_preempt_need_deferred_qs(t))
> > > > >  		return;
> > > > > -	if (couldrecurse)
> > > > > -		t->rcu_read_lock_nesting -= RCU_NEST_BIAS;
> > > > >  	local_irq_save(flags);
> > > > >  	rcu_preempt_deferred_qs_irqrestore(t, flags);
> > > > > -	if (couldrecurse)
> > > > > -		t->rcu_read_lock_nesting += RCU_NEST_BIAS;
> > > > >  }
> > > > >  
> > > > >  /*
> > > > > --
> > > > > 2.20.1
> > > > >