Re: [PATCH tip/core/rcu 1/3] rcu-tasks: *_ONCE() for rcu_tasks_cbs_head
From: Paul E. McKenney
Date: Tue Feb 18 2020 - 11:27:22 EST
On Tue, Feb 18, 2020 at 08:56:48AM +0100, Peter Zijlstra wrote:
> On Mon, Feb 17, 2020 at 10:16:16AM -0800, Paul E. McKenney wrote:
> > On Mon, Feb 17, 2020 at 01:38:51PM +0100, Peter Zijlstra wrote:
> > > On Fri, Feb 14, 2020 at 04:25:18PM -0800, paulmck@xxxxxxxxxx wrote:
> > > > From: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> > > >
> > > > The RCU tasks list of callbacks, rcu_tasks_cbs_head, is sampled locklessly
> > > > by rcu_tasks_kthread() when waiting for work to do. This commit therefore
> > > > applies READ_ONCE() to that lockless sampling and WRITE_ONCE() to the
> > > > single potential store outside of rcu_tasks_kthread().
> > > >
> > > > This data race was reported by KCSAN. Not appropriate for backporting
> > > > due to failure being unlikely.
> > >
> > > What failure is possible here? AFAICT this is (again) one of those
> > > load-compare-against-constant-discard patterns that are impossible to
> > > mess up.
> > First, please keep in mind that this is RCU code. Rather uncomplicated
> > for RCU, to be sure, but still RCU code.
> > The failure modes are thus as follows:
> > o I produce a patch for which KCSAN gives a legitimate warning,
> > but this warning is obscured by a pile of other warnings.
> > Yes, we should continue improving KCSAN's ability to adapt
> > to the user's desired compiler-optimization risk level, but
> > in RCU's case that risk level is set quite low.
> > In RCU, what others are calling false positives are therefore
> > addressed. Yes, this does cost me a bit of work, but it is
> > trivial compared to the work required to track down a real bug.
> > o Someone optimizes or otherwise changes the wait/wakeup code,
> > which inadvertently gives the compiler more scope for mischief.
> > In short, within RCU, I am handling all KCSAN complaints. This is looking
> > to be an extremely inexpensive insurance policy for RCU. Other subsystems
> > are of course free to make their own tradeoffs, and subsystems having
> > less-aggressive concurrency control might be well-advised to take a
> > different path than the one I am taking.
> I just took offence at the Changelog wording. It seems to suggest there
> actually is a problem, when there is not.
Quoting the changelog: "Not appropriate for backporting due to failure
being unlikely."