Re: [PATCH RFC v2 tip/core/rcu 01/22] sched/core: Add function to sample state of locked-down task

From: Paul E. McKenney
Date: Tue Mar 24 2020 - 13:20:29 EST


On Tue, Mar 24, 2020 at 12:52:55PM -0400, Joel Fernandes wrote:
> On Tue, Mar 24, 2020 at 08:48:22AM -0700, Paul E. McKenney wrote:
> [..]
> > >
> > > > diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
> > > > index 44edd0a..43991a4 100644
> > > > --- a/kernel/rcu/tree.h
> > > > +++ b/kernel/rcu/tree.h
> > > > @@ -455,6 +455,8 @@ static void rcu_bind_gp_kthread(void);
> > > >  static bool rcu_nohz_full_cpu(void);
> > > >  static void rcu_dynticks_task_enter(void);
> > > >  static void rcu_dynticks_task_exit(void);
> > > > +static void rcu_dynticks_task_trace_enter(void);
> > > > +static void rcu_dynticks_task_trace_exit(void);
> > > >
> > > >  /* Forward declarations for tree_stall.h */
> > > >  static void record_gp_stall_check_time(void);
> > > > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > > > index 9355536..f4a344e 100644
> > > > --- a/kernel/rcu/tree_plugin.h
> > > > +++ b/kernel/rcu/tree_plugin.h
> > > > @@ -2553,3 +2553,21 @@ static void rcu_dynticks_task_exit(void)
> > > >  	WRITE_ONCE(current->rcu_tasks_idle_cpu, -1);
> > > >  #endif /* #if defined(CONFIG_TASKS_RCU) && defined(CONFIG_NO_HZ_FULL) */
> > > >  }
> > > > +
> > > > +/* Turn on heavyweight RCU tasks trace readers on idle/user entry. */
> > > > +static void rcu_dynticks_task_trace_enter(void)
> > > > +{
> > > > +#ifdef CONFIG_TASKS_RCU_TRACE
> > > > +	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> > > > +		current->trc_reader_special.b.need_mb = true;
> > >
> > > If this is ever called from the middle of a reader section (that is, we
> > > transition from IPI mode to using heavier reader sections), then is a memory
> > > barrier needed here just to protect the reader section that already started?
> >
> > That memory barrier is provided by the memory ordering in the callers
> > of rcu_dynticks_task_trace_enter() and rcu_dynticks_task_trace_exit(),
> > namely, those callers' atomic_add_return() invocations. These barriers
> > pair with the pair of smp_rmb() calls in rcu_dynticks_zero_in_eqs(),
> > which is in turn invoked from the function formerly known as
> > trc_inspect_reader_notrunning(), AKA trc_inspect_reader().
> >
> > This same pair of smp_rmb() calls also pairs with the conditional smp_mb()
> > calls in rcu_read_lock_trace() and rcu_read_unlock_trace().
> >
> > In your scenario, the calls in rcu_read_lock_trace() and
> > rcu_read_unlock_trace() wouldn't happen, but in that case the ordering
> > from atomic_add_return() would suffice.
> >
> > Does that work? Or is there an ordering bug in there somewhere?
>
> Thanks for explaining. Could the following scenario cause a problem?
>
> If we consider the litmus test:
>
> {
>	int x = 1;
>	int *y = &x;
>	int z = 1;
> }
>
> P0(int *x, int *z, int **y)
> {
>	int *r0;
>	int r1;
>
>	dynticks_eqs_trace_enter();
>
>	rcu_read_lock();
>	r0 = rcu_dereference(*y);
>
>	dynticks_eqs_trace_exit(); // cut-off reader's mb wings :)

RCU Tasks Trace currently assumes that a reader will not start within
idle and end outside of idle. However, keep in mind that eqs exit
implies a full memory barrier and changes the ->dynticks counter.
The call to rcu_dynticks_task_trace_exit() is not standalone. Instead,
the atomic_add_return() immediately preceding that call is critically
important. And ditto for rcu_dynticks_task_trace_enter() and the
atomic_add_return() immediately following it.

The overall effect is similar to that of sequence locks.
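
To make that pairing concrete, the idle-entry/exit callers look roughly
like the following.  This is a sketch of the ordering only -- the real
rcu_dynticks_eqs_enter()/_exit() in kernel/rcu/tree.c also handle the
deferred-quiescent-state bits, which are elided here:

	/* Ordering sketch only, not the exact tree.c code. */
	static void rcu_dynticks_eqs_enter(void)
	{
		struct rcu_data *rdp = this_cpu_ptr(&rcu_data);

		rcu_dynticks_task_trace_enter();  /* Before ->dynticks update. */
		/* Full barrier plus change to ->dynticks. */
		atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
	}

	static void rcu_dynticks_eqs_exit(void)
	{
		struct rcu_data *rdp = this_cpu_ptr(&rcu_data);

		/* Full barrier plus change to ->dynticks. */
		atomic_add_return(RCU_DYNTICK_CTRL_CTR, &rdp->dynticks);
		rcu_dynticks_task_trace_exit();   /* After ->dynticks update. */
	}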

>	r1 = READ_ONCE(*r0); // Reordering of this beyond the unlock() is bad.
>	rcu_read_unlock();
> }
>
> P1(int *x, int *z, int **y)
> {
>	rcu_assign_pointer(*y, z);
>	synchronize_rcu();
>	WRITE_ONCE(*x, 0);
> }
>
> exists (0:r0=x /\ 0:r1=0)
>
> Then the following situation can happen?
>
> READER                           UPDATER
>
>                                  y = &z;
>
> eqs_enter(); // full-mb
>
> rcu_read_lock(); // full-mb
> // r0 = x;
>                                  // GP-start
>                                  // ..zero_in_eqs() notices eqs, no IPI
> eqs_exit(); // full-mb
>
> // actual r1 = *x but will reorder
>
> rcu_read_unlock(); // no-mb
>                                  // GP-finish as notices nesting = 0
>                                  x = 0;

Followed by an smp_rmb() and then the second read of ->dynticks, which
will see a non-zero bottom bit for ->dynticks, and thus return false.
This in turn causes the possibly-zero nesting counter to be ignored.
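
That is, the grace-period-side check is structured roughly as follows.
This is a simplified sketch of rcu_dynticks_zero_in_eqs() -- the real
function also masks off the special bits of ->dynticks, and the argument
names here are illustrative:

	/* Simplified sketch of the grace-period-side check. */
	static bool rcu_dynticks_zero_in_eqs(struct rcu_data *rdp, int *nesting)
	{
		int snap = atomic_read(&rdp->dynticks);	/* First read. */

		if (!rcu_dynticks_in_eqs(snap))
			return false;	/* Not idle, cannot trust the counter. */
		smp_rmb();	/* Order ->dynticks read before *nesting read. */
		if (READ_ONCE(*nesting))
			return false;	/* Reader in flight. */
		smp_rmb();	/* Order *nesting read before ->dynticks re-read. */
		/* Trust the zero nesting count only if ->dynticks did not move. */
		return snap == atomic_read(&rdp->dynticks);
	}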

> // reordered r1 = *x = 0;
>
>
> Basically r0=x /\ r1=0 happened because the r1 = *x load got reordered past
> the rcu_read_unlock(). Or did I miss something that prevents it?

Yes, the change in value of ->dynticks and the full ordering associated
with the atomic_add_return() that makes this change.
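
For completeness, the reader-side half of that pairing, the conditional
smp_mb() in rcu_read_lock_trace() mentioned above, looks roughly like the
following sketch (lockdep annotations omitted; rcu_read_unlock_trace() is
the more involved mirror image, with ->trc_reader_nesting being the
per-task nesting counter from this series):

	static inline void rcu_read_lock_trace(void)
	{
		struct task_struct *t = current;

		WRITE_ONCE(t->trc_reader_nesting, READ_ONCE(t->trc_reader_nesting) + 1);
		if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB) &&
		    t->trc_reader_special.b.need_mb)
			smp_mb(); /* Pairs with smp_rmb()s in rcu_dynticks_zero_in_eqs(). */
	}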

Thanx, Paul

> thanks,
>
> - Joel
>
>
>
>
> > > thanks,
> > >
> > > - Joel
> > >
> > >
> > > > +#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
> > > > +}
> > > > +
> > > > +/* Turn off heavyweight RCU tasks trace readers on idle/user exit. */
> > > > +static void rcu_dynticks_task_trace_exit(void)
> > > > +{
> > > > +#ifdef CONFIG_TASKS_RCU_TRACE
> > > > +	if (IS_ENABLED(CONFIG_TASKS_TRACE_RCU_READ_MB))
> > > > +		current->trc_reader_special.b.need_mb = false;
> > > > +#endif /* #ifdef CONFIG_TASKS_RCU_TRACE */
> > > > +}