Re: [PATCH] riscv/barrier: Define __smp_{store_release,load_acquire}

From: Daniel Lustig
Date: Tue Feb 27 2018 - 17:20:45 EST


On 2/27/2018 10:21 AM, Palmer Dabbelt wrote:
> On Mon, 26 Feb 2018 18:24:11 PST (-0800), parri.andrea@xxxxxxxxx wrote:
>> Introduce __smp_{store_release,load_acquire}, and rely on the generic
>> definitions for smp_{store_release,load_acquire}. This avoids the use
>> of full ("rw,rw") fences on SMP.
>>
>> Signed-off-by: Andrea Parri <parri.andrea@xxxxxxxxx>
>> ---
>>  arch/riscv/include/asm/barrier.h | 15 +++++++++++++++
>>  1 file changed, 15 insertions(+)
>>
>> diff --git a/arch/riscv/include/asm/barrier.h b/arch/riscv/include/asm/barrier.h
>> index 5510366d169ae..d4628e4b3a5ea 100644
>> --- a/arch/riscv/include/asm/barrier.h
>> +++ b/arch/riscv/include/asm/barrier.h
>> @@ -38,6 +38,21 @@
>>  #define __smp_rmb()	RISCV_FENCE(r,r)
>>  #define __smp_wmb()	RISCV_FENCE(w,w)
>>
>> +#define __smp_store_release(p, v)					\
>> +do {									\
>> +	compiletime_assert_atomic_type(*p);				\
>> +	RISCV_FENCE(rw,w);						\
>> +	WRITE_ONCE(*p, v);						\
>> +} while (0)
>> +
>> +#define __smp_load_acquire(p)						\
>> +({									\
>> +	typeof(*p) ___p1 = READ_ONCE(*p);				\
>> +	compiletime_assert_atomic_type(*p);				\
>> +	RISCV_FENCE(r,rw);						\
>> +	___p1;								\
>> +})
>> +
>>  /*
>>  * This is a very specific barrier: it's currently only used in two places in
>>  * the kernel, both in the scheduler. See include/linux/spinlock.h for the two
>
> I'm adding Daniel just in case I misunderstood what's going on here,
> but these look good to me. As this is a non-trivial memory model
> change I'm going to let it bake in linux-next for a bit just so it
> gets some visibility.

Looks good to me too. In particular, it also covers the
Write->release(p)->acquire(p)->Write ordering that we were debating
in the broader LKMM thread, which is good.
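
For reference, a minimal sketch of that pattern, written against the
generic smp_store_release()/smp_load_acquire() wrappers that now pick
up these definitions (the variables x, y, flag and the cpu0()/cpu1()
functions are made up for illustration, not part of the patch):

	int x, y, flag;

	void cpu0(void)
	{
		WRITE_ONCE(x, 1);		/* Write before the release */
		smp_store_release(&flag, 1);	/* FENCE rw,w; then store, per this patch */
	}

	void cpu1(void)
	{
		int r0;

		r0 = smp_load_acquire(&flag);	/* load; then FENCE r,rw, per this patch */
		if (r0)
			WRITE_ONCE(y, 1);	/* Write after the acquire */
	}

If cpu1() reads 1 from the release store, the write to x is ordered
before the write to y: the rw,w fence keeps W(x) before the store to
flag, and the r,rw fence keeps the load of flag before W(y).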

Dan

>
> Thanks