Re: Adding plain accesses and detecting data races in the LKMM
From: Andrea Parri
Date: Thu Apr 18 2019 - 08:54:23 EST
> Another question is "should the kernel permit smp_mb__{before,after}*()
> anywhere other than immediately before or after the primitive being
> strengthened?"
Mmh, I do think that keeping these barriers "immediately before or after
the primitive being strengthened" is a good practice (readability, and
all that), if this is what you're suggesting.

However, a first audit of the callsites showed that this practice is
in fact not always applied, notably... ;-)

    kernel/rcu/tree_exp.h:sync_exp_work_done
    kernel/sched/cpupri.c:cpupri_set

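For concreteness, the kind of pattern I have in mind is something like
the following (a purely hypothetical sketch with made-up names, not the
actual code of those callsites):

    atomic_inc(&v->count);
    /* ... other, unrelated statements ... */
    if (cond)
        smp_mb__after_atomic();  /* meant to strengthen the atomic_inc() above */

that is, the barrier and the primitive it strengthens end up separated
by other code and, possibly, by control flow.
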
So there appear to be at least some exceptions to (or reasons for not
always following) this practice?  Thoughts?

BTW, while auditing these callsites, I stumbled across the following
snippet (from kernel/futex.c):

    *futex = newval;
    sys_futex(WAKE, futex);
      futex_wake(futex);
      smp_mb(); (B)
      if (waiters)
        ...

where B is actually (cf. futex_get_mm()):

    atomic_inc(...->mm_count);
    smp_mb__after_atomic();

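For reference, futex_get_mm() is, roughly, the following (my paraphrase
of the current sources; please double-check against the actual code):

    static inline void futex_get_mm(union futex_key *key)
    {
        mmgrab(key->private.mm);    /* atomic_inc(&mm->mm_count) */
        /*
         * Ensure futex_get_mm() implies a full barrier such that
         * get_futex_key() implies a full barrier; this is relied
         * upon as smp_mb(); (B).
         */
        smp_mb__after_atomic();
    }
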
It seems worth mentioning that, AFAICT, this sequence does not
necessarily provide ordering when plain accesses are involved: consider,
e.g., the following variant of the snippet:

    A:*x = 1;
    /*
     * I've "ignored" the syscall, which should provide
     * (at least) a compiler barrier...
     */
    atomic_inc(u);
    smp_mb__after_atomic();
    B:r0 = *y;

On x86, AFAICT, the compiler can do this:

    atomic_inc(u);
    A:*x = 1;
    smp_mb__after_atomic();
    B:r0 = *y;

(the x86 implementation of atomic_inc() contains no compiler barrier);
the CPU can then "reorder" A and B, since smp_mb__after_atomic() is
#defined to a (mere) compiler barrier on x86 and so does nothing at run
time to prevent the store A from being reordered with the later load B.
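
For reference, AFAICT the relevant x86 definitions are roughly the
following (my paraphrase of arch/x86/include/asm/atomic.h and
arch/x86/include/asm/barrier.h; please double-check against the actual
sources):

    static __always_inline void arch_atomic_inc(atomic_t *v)
    {
        asm volatile(LOCK_PREFIX "incl %0"
                     : "+m" (v->counter));  /* no "memory" clobber */
    }

    /* atomic ops are already serializing on x86, hence: */
    #define __smp_mb__after_atomic()    barrier()

so, AFAICT, nothing here prevents the compiler transformation shown
above.
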

The mips implementation also seems affected by such "reorderings": I am
not familiar with this implementation but, AFAICT, it does not enforce
ordering from A to B in the following snippet:

    A:*x = 1;
    atomic_inc(u);
    smp_mb__after_atomic();
    B:WRITE_ONCE(*y, 1);

when CONFIG_WEAK_ORDERING=y and CONFIG_WEAK_REORDERING_BEYOND_LLSC=n.
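
AFAICT, in that configuration smp_mb__after_atomic() boils down to
something like the following (my paraphrase of
arch/mips/include/asm/barrier.h; I may well be misreading the ifdefs,
so please double-check):

    #define __smp_mb__after_atomic()    smp_llsc_mb()
    #define smp_llsc_mb()    __asm__ __volatile__(__WEAK_LLSC_MB : : : "memory")
    /* __WEAK_LLSC_MB expands to a SYNC only when
       CONFIG_WEAK_REORDERING_BEYOND_LLSC=y (and CONFIG_SMP=y) */

i.e., to a compiler barrier with no SYNC instruction, so that any
ordering from A to B would have to come from the ll/sc sequence in
atomic_inc() itself, if at all.
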
Do these observations make sense to you? Thoughts?
Andrea