Re: [PATCH v2 3/9] rcu/sync: Remove custom check for reader-section
From: Paul E. McKenney
Date: Sat Jul 13 2019 - 04:23:50 EST
On Fri, Jul 12, 2019 at 11:10:08PM -0400, Joel Fernandes wrote:
> On Fri, Jul 12, 2019 at 11:01:50PM -0400, Joel Fernandes wrote:
> > On Fri, Jul 12, 2019 at 04:32:06PM -0700, Paul E. McKenney wrote:
> > > On Fri, Jul 12, 2019 at 05:35:59PM -0400, Joel Fernandes wrote:
> > > > On Fri, Jul 12, 2019 at 01:00:18PM -0400, Joel Fernandes (Google) wrote:
> > > > > The rcu/sync code was doing its own check of whether we are in a reader
> > > > > section. With the RCU flavors consolidated and the generic helper added in
> > > > > this series, this is no longer needed. We can just use the generic helper,
> > > > > which results in a nice cleanup.
> > > > >
> > > > > Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> > > > > Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> > > >
> > > > Hi Oleg,
> > > > Slightly unrelated to the patch,
> > > > I tried hard to understand this comment below in percpu_down_read() but no dice.
> > > >
> > > > I do understand how rcu/sync and percpu rwsem work, however the comment
> > > > below didn't make much sense to me. For one, there is no readers_fast
> > > > anymore, so I did not follow what readers_fast means. Could the comment be
> > > > updated to reflect the latest changes?
> > > > Also, could you help me understand how a writer is not able to both change
> > > > sem->state and check the per-cpu read counters at the same time, as the
> > > > comment tries to say?
> > > >
> > > > /*
> > > >  * We are in an RCU-sched read-side critical section, so the writer
> > > >  * cannot both change sem->state from readers_fast and start checking
> > > >  * counters while we are here. So if we see !sem->state, we know that
> > > >  * the writer won't be checking until we're past the preempt_enable()
> > > >  * and that once the synchronize_rcu() is done, the writer will see
> > > >  * anything we did within this RCU-sched read-side critical section.
> > > >  */
> > > >
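> > > > To make sure we are looking at the same thing, below is roughly the fast path
> > > > this comment sits in, as I read include/linux/percpu-rwsem.h (a simplified
> > > > sketch, so the details may be off), plus the writer-side ordering I believe
> > > > the comment is arguing about:
> > > >
> > > >   /* Reader fast path (simplified sketch, not the exact source): */
> > > >   static inline void percpu_down_read(struct percpu_rw_semaphore *sem)
> > > >   {
> > > >           preempt_disable();                      /* begin RCU-sched reader */
> > > >           __this_cpu_inc(*sem->read_count);       /* mark this CPU as a reader */
> > > >           if (unlikely(!rcu_sync_is_idle(&sem->rss)))
> > > >                   __percpu_down_read(sem, false); /* writer active, slow path */
> > > >           preempt_enable();                       /* end RCU-sched reader */
> > > >   }
> > > >
> > > >   /*
> > > >    * Writer side, as I understand it:
> > > >    *  1. rcu_sync_enter() moves the rcu_sync state out of GP_IDLE, so new
> > > >    *     readers start taking the slow path above,
> > > >    *  2. the synchronize_rcu() inside it waits for every preempt-disabled
> > > >    *     region that might still have seen the old "idle" state,
> > > >    *  3. only then does the writer sum the per-cpu read_count counters.
> > > >    */
> > > >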
> > > > Also,
> > > > I guess we could get rid of all of the gp_ops struct stuff now that all the
> > > > callbacks are the same. I will post that as a follow-up patch to this
> > > > series.
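> > > >
> > > > For reference, by the gp_ops stuff I mean the per-flavor callback table in
> > > > kernel/rcu/sync.c, which in the tree I am looking at is roughly this (from
> > > > memory, so the details may be off):
> > > >
> > > >   static const struct {
> > > >           void (*sync)(void);
> > > >           void (*call)(struct rcu_head *, void (*)(struct rcu_head *));
> > > >           void (*wait)(void);
> > > >   } gp_ops[] = {
> > > >           [RCU_SYNC] = {
> > > >                   .sync = synchronize_rcu,
> > > >                   .call = call_rcu,
> > > >                   .wait = rcu_barrier,
> > > >           },
> > > >           [RCU_SCHED_SYNC] = {        /* now identical to RCU_SYNC */
> > > >                   .sync = synchronize_rcu,
> > > >                   .call = call_rcu,
> > > >                   .wait = rcu_barrier,
> > > >           },
> > > >           [RCU_BH_SYNC] = {           /* now identical to RCU_SYNC */
> > > >                   .sync = synchronize_rcu,
> > > >                   .call = call_rcu,
> > > >                   .wait = rcu_barrier,
> > > >           },
> > > >   };
> > > >
> > > > With all three entries identical, the indirection no longer buys anything and
> > > > the callers could just invoke the consolidated API directly.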
> > >
> > > Hello, Joel,
> > >
> > > Oleg has a set of patches updating this code that just hit mainline
> > > this week. These patches get rid of the code that previously handled
> > > RCU's multiple flavors. Or are you looking at current mainline and
> > > am I just missing your point?
> > >
> >
> > Hi Paul,
> > You are right on point. I have a bad habit of not rebasing my trees. In this
> > case, the feature branch in question was based on v5.1. Needless to say, I
> > need to rebase my tree.
> >
> > Yes, this rcu/sync cleanup patch does conflict when I rebase, but the other
> > patches rebase just fine.
> >
> > The 2 options I see are:
> > 1. Let us drop this patch for now and I resend it later.
> > 2. I resend all patches based on Linus's master branch.
>
> Below is the updated patch based on Linus master branch:
>
> ---8<-----------------------
>
> From 5f40c9a07fcf3d6dafc2189599d0ba9443097d0f Mon Sep 17 00:00:00 2001
> From: "Joel Fernandes (Google)" <joel@xxxxxxxxxxxxxxxxx>
> Date: Fri, 12 Jul 2019 12:13:27 -0400
> Subject: [PATCH v2.1 3/9] rcu/sync: Remove custom check for reader-section
>
> The rcu/sync code was doing its own check of whether we are in a reader
> section. With the RCU flavors consolidated and the generic helper added in
> this series, this is no longer needed. We can just use the generic helper,
> which results in a nice cleanup.
>
> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> ---
> include/linux/rcu_sync.h | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/include/linux/rcu_sync.h b/include/linux/rcu_sync.h
> index 9b83865d24f9..0027d4c8087c 100644
> --- a/include/linux/rcu_sync.h
> +++ b/include/linux/rcu_sync.h
> @@ -31,9 +31,7 @@ struct rcu_sync {
> */
> static inline bool rcu_sync_is_idle(struct rcu_sync *rsp)
> {
> -	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
> -			 !rcu_read_lock_bh_held() &&
> -			 !rcu_read_lock_sched_held(),
> +	RCU_LOCKDEP_WARN(!rcu_read_lock_any_held(),

I believe that replacing rcu_read_lock_sched_held() with preemptible()
in a CONFIG_PREEMPT=n kernel will give you false-positive splats here.
If you have not already done so, could you please give it a try?
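
If it helps, and going from memory rather than the exact include/linux/preempt.h
source, preemptible() only does a real check when preempt counting is compiled
in:

	#ifdef CONFIG_PREEMPT_COUNT
	#define preemptible()	(preempt_count() == 0 && !irqs_disabled())
	#else
	#define preemptible()	0	/* compile-time constant, no real check */
	#endif

So in a CONFIG_PREEMPT=n build, its answer depends on whether CONFIG_PREEMPT_COUNT
happens to be enabled by debug options rather than on whether the code really is
inside an RCU-sched reader, which is where I would expect the bogus splats to
come from.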
Thanx, Paul

> "suspicious rcu_sync_is_idle() usage");
> return !READ_ONCE(rsp->gp_state); /* GP_IDLE */
> }
> --
> 2.22.0.510.g264f2c817a-goog
>