Re: [PATCH v2 03/23] kcsan: Avoid checking scoped accesses from nested contexts

From: Marco Elver
Date: Mon Nov 29 2021 - 06:54:46 EST


On Mon, Nov 29, 2021 at 04:47PM +0800, Boqun Feng wrote:
> Hi Marco,
>
> On Thu, Nov 18, 2021 at 09:10:07AM +0100, Marco Elver wrote:
> > Avoid checking scoped accesses from nested contexts (such as nested
> > interrupts or in scheduler code) which share the same kcsan_ctx.
> >
> > This is to avoid detecting false positive races of accesses in the same
>
> Could you provide an example for a false positive?
>
> I think we do want to detect the following race:
>
> static int v = SOME_VALUE; // a percpu variable.
> static int other_v = ... ;
>
> void foo(..)
> {
>         int tmp;
>         int other_tmp;
>
>         preempt_disable();
>         {
>                 ASSERT_EXCLUSIVE_ACCESS_SCOPED(v);
>                 tmp = v;
>
>                 other_tmp = other_v; // int_handler() may run here
>
>                 v = tmp + 2;
>         }
>         preempt_enable();
> }
>
> void int_handler() // an interrupt handler
> {
>         v++;
> }
>
> , if I understand correctly, we can detect this currently, but with this
> patch, we cannot detect this if the interrupt happens while we're doing
> the check for "other_tmp = other_v;", right? Of course, running tests
> multiple times may eventually catch this, but I just want to understand
> what this patch is for, thanks!

The above will still be detected. Task and interrupt contexts in this
case are distinct, i.e. their kcsan_ctx differ (see get_ctx()).
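
For reference, get_ctx() essentially boils down to this (slightly
simplified here):

static __always_inline struct kcsan_ctx *get_ctx(void)
{
        /* Task context uses the task's ctx; interrupts use a per-CPU ctx. */
        return in_task() ? &current->kcsan_ctx : raw_cpu_ptr(&kcsan_cpu_ctx);
}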

But there are rare cases where kcsan_ctx is shared, such as nested
interrupts (NMI?), or when entering scheduler code -- which currently
has KCSAN_SANITIZE := n, but I occasionally test it, which is how I
found this problem. The problem shows up frequently when enabling KCSAN
in kernel/sched and placing a random ASSERT_EXCLUSIVE_ACCESS_SCOPED()
in task context, or when just enabling "weak memory modeling" without
this fix. You also need CONFIG_PREEMPT=y + CONFIG_KCSAN_INTERRUPT_WATCHER=y.

The emphasis here really is on _shared kcsan_ctx_, which is not too
common. As noted in the commit description, the setup requires "[...]
setting up a watchpoint for a non-scoped (normal) access that also
"conflicts" with a current scoped access."

Consider this:

static int v;
int foo(..)
{
        ASSERT_EXCLUSIVE_ACCESS_SCOPED(v);
        v++; // preempted during watchpoint for 'v++'
}

Here we set up a scoped_access to be checked for v. Then, on v++, a
watchpoint is set up for the normal access. While the watchpoint is set
up, the task is preempted and, upon entering scheduler code, we're still
in_task() and 'current' is still the same, so get_ctx() returns a
kcsan_ctx whose scoped_accesses list is non-empty, containing the
scoped access for foo()'s ASSERT_EXCLUSIVE.

That means that when instrumenting scheduler code, any other code called
by scheduler code, or nested interrupts (anything where get_ctx() still
returns the parent context's kcsan_ctx), KCSAN would now perform checks
based on the parent context's scoped access. And because the parent
context also has a watchpoint set up on the variable that conflicts with
the scoped access, we'd report a nonsensical race.
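
To make this concrete: every instrumented plain access also walks the
current kcsan_ctx's scoped_accesses list, roughly like this (simplified;
the real code in kernel/kcsan/core.c has a few more details):

static noinline void kcsan_check_scoped_accesses(void)
{
        struct kcsan_ctx *ctx = get_ctx();      /* same ctx as the preempted task! */
        struct kcsan_scoped_access *sa;

        list_for_each_entry(sa, &ctx->scoped_accesses, list) {
                /* May find the parent context's own watchpoint -> bogus report. */
                check_access(sa->ptr, sa->size, sa->type, sa->ip);
        }
}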

This case is also possible:

static int v;
static int x;
int foo(..)
{
        ASSERT_EXCLUSIVE_ACCESS_SCOPED(v);
        x++; // preempted during watchpoint for 'v' after checking x++
}

Here, all we need is for the scoped access to be checked after x++ and
end up with a watchpoint for it; we then enter scheduler code, which
also checks 'v', sees the conflicting watchpoint, and reports a
nonsensical race again.

By disallowing scoped access checking for a kcsan_ctx in these
situations, we simply make sure that in such nested contexts where
kcsan_ctx is shared, none of these nonsensical races is detected or
reported.
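
Concretely, the idea is to suppress the scoped-access walk whenever the
kcsan_ctx is "busy", roughly like this (a sketch only; the exact code is
in the patch):

static noinline void kcsan_check_scoped_accesses(void)
{
        struct kcsan_ctx *ctx = get_ctx();
        struct kcsan_scoped_access *sa;

        if (ctx->disable_scoped)        /* nested context sharing kcsan_ctx */
                return;

        ctx->disable_scoped++;          /* also avoids recursion */
        list_for_each_entry(sa, &ctx->scoped_accesses, list)
                check_access(sa->ptr, sa->size, sa->type, sa->ip);
        ctx->disable_scoped--;
}

With the same counter also held non-zero while a watchpoint is being set
up, a nested context sharing the kcsan_ctx can no longer match the
parent's scoped access against the parent's own watchpoint.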

Hopefully that clarifies what this is about.

Thanks,
-- Marco