Re: [PATCH] LoongArch: Fixup cmpxchg semantics for memory barrier

From: Will Deacon
Date: Tue Aug 01 2023 - 04:32:20 EST


On Tue, Aug 01, 2023 at 10:29:31AM +0800, WANG Rui wrote:
> On Tue, Aug 1, 2023 at 9:16 AM <guoren@xxxxxxxxxx> wrote:
> > diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
> > index 979fde61bba8..6a05b92814b6 100644
> > --- a/arch/loongarch/include/asm/cmpxchg.h
> > +++ b/arch/loongarch/include/asm/cmpxchg.h
> > @@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
> > " move $t0, %z4 \n" \
> > " " st " $t0, %1 \n" \
> > " beqz $t0, 1b \n" \
> > - "2: \n" \
> > __WEAK_LLSC_MB \
> > + "2: \n" \
>
> Thanks for the patch.
>
> This would look pretty good if it weren't for the special memory
> barrier semantics of LoongArch's LL and SC instructions.
>
> The LL/SC memory barrier behavior of LoongArch:
>
> * LL: <memory-barrier> + <load-exclusive>
> * SC: <store-conditional> + <memory-barrier>
>
> and LoongArch's weak memory model allows load/load reordering for the
> same address.
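
For anyone reading along, the hunk above only moves __WEAK_LLSC_MB
relative to the "2:" label. A rough sketch of the whole loop (the
ll/bne lines sit outside the hunk, so they are reconstructed from the
file rather than quoted from the patch; " ld "/" st " expand to
ll.w/sc.w or ll.d/sc.d):

  1:  " ld "  %0, %2        # load-linked old value
      bne     %0, %z3, 2f   # compare failed -> early exit
      move    $t0, %z4
      " st "  $t0, %1       # store-conditional new value
      beqz    $t0, 1b       # SC failed -> retry
      __WEAK_LLSC_MB        # patched placement: success path only
  2:                        # original placement put the barrier here,
                            # so the failed-compare path ran it too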

Hmm, somehow this one passed me by, but I think that puts you in the naughty
corner with Itanium. It probably also means your READ_ONCE() is broken,
unless the compiler emits barriers for volatile reads (like ia64)?
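
To make that concrete, here's a minimal sketch (not from the patch or
the thread) of what same-address load/load reordering does to two
back-to-back READ_ONCE()s of a counter that another CPU only ever
increments:

#include <linux/compiler.h>	/* READ_ONCE() */
#include <linux/bug.h>		/* WARN_ON() */

/* 'ctr' is only ever incremented by the other CPU. */
static void reader(unsigned long *ctr)
{
	unsigned long a = READ_ONCE(*ctr);
	unsigned long b = READ_ONCE(*ctr);

	/*
	 * If the two plain loads can be reordered even though they hit
	 * the same address, 'b' may be older than 'a' and this fires.
	 * On ia64 the compiler emits acquire loads for volatile
	 * accesses, which is what keeps READ_ONCE() ordered there.
	 */
	WARN_ON(b < a);
}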

Will