Re: [PATCH] LoongArch: Fixup cmpxchg semantic for memory barrier

From: Guo Ren
Date: Tue Aug 01 2023 - 05:10:47 EST


On Tue, Aug 1, 2023 at 5:02 PM Guo Ren <guoren@xxxxxxxxxx> wrote:
>
> On Tue, Aug 1, 2023 at 10:29 AM WANG Rui <wangrui@xxxxxxxxxxx> wrote:
> >
> > Hello,
> >
> > On Tue, Aug 1, 2023 at 9:16 AM <guoren@xxxxxxxxxx> wrote:
> > > diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
> > > index 979fde61bba8..6a05b92814b6 100644
> > > --- a/arch/loongarch/include/asm/cmpxchg.h
> > > +++ b/arch/loongarch/include/asm/cmpxchg.h
> > > @@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
> > > " move $t0, %z4 \n" \
> > > " " st " $t0, %1 \n" \
> > > " beqz $t0, 1b \n" \
> > > - "2: \n" \
> > > __WEAK_LLSC_MB \
> > > + "2: \n" \
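For context, the whole loop reads roughly as below with this patch applied (a sketch based on the hunk and on arch/loongarch/include/asm/cmpxchg.h; the operand constraints are from memory and approximate). The only change is the position of __WEAK_LLSC_MB relative to the "2:" label: placed before it, the barrier is skipped on the bne-taken (compare-failed) path.

    #define __cmpxchg_asm(ld, st, m, old, new)                          \
    ({                                                                  \
            __typeof(old) __ret;                                        \
                                                                        \
            __asm__ __volatile__(                                       \
            "1:     " ld "  %0, %2          # __cmpxchg_asm \n"         \
            "       bne     %0, %z3, 2f                     \n"         \
            "       move    $t0, %z4                        \n"         \
            "       " st "  $t0, %1                         \n"         \
            "       beqz    $t0, 1b                         \n"         \
            __WEAK_LLSC_MB          /* patched position */              \
            "2:                                             \n"         \
            : "=&r" (__ret), "=ZB" (*m)                                 \
            : "ZB" (*m), "Jr" (old), "Jr" (new)                         \
            : "t0", "memory");                                          \
                                                                        \
            __ret;                                                      \
    })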
> >
> > Thanks for the patch.
> >
> > This would look pretty good if it weren't for the special memory
> > barrier semantics of LoongArch's LL and SC instructions.
> >
> > The LL/SC memory barrier behavior of LoongArch:
> >
> > * LL: <memory-barrier> + <load-exclusive>
> > * SC: <store-conditional> + <memory-barrier>
> >
> > and LoongArch's weak memory model allows load/load reordering for the
> > same address (i.e., the CoRR ordering is not guaranteed by default).
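Concretely, "load/load reordering for the same address" means the classic CoRR (coherent read-read) litmus test is not forbidden by default. A sketch in kernel C (READ_ONCE/WRITE_ONCE only mark the accesses; the outcome is the point):

    int x = 0;

    /* CPU0 */                 /* CPU1 */
    WRITE_ONCE(x, 1);          r1 = READ_ONCE(x);   /* observes 1 */
                               r2 = READ_ONCE(x);   /* may observe 0 */

    /*
     * r1 == 1 && r2 == 0 is the CoRR violation: the second read is
     * satisfied before the first. A barrier between the two reads
     * (what __WEAK_LLSC_MB provides in the LL/SC loop) forbids it.
     */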
> The CoRR problem would cause wider problems than just this case. For this
> case, do you mean that your LL -> LL would be reordered?
>
Sorry, my mail client reformatted the example in my previous mail. I meant:

CPU0  LL(2)   (sets the ex-monitor)
CPU1  STORE   (breaks the ex-monitor)
CPU0  LL(1)   (reordered instruction, re-arms the ex-monitor)
CPU0  SC(3)   (succeeds?)
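Spelled out against a single address A (A and the ll.w/sc.w/st.w encoding here are illustrative, not from the thread), one reading of this interleaving:

    /*
     * CPU0  ll.w  %r, (A)    LL(2): observes a value, arms the ex-monitor
     * CPU1  st.w  %s, (A)    STORE: plain store, clears CPU0's monitor
     * CPU0  ll.w  %r, (A)    LL(1): younger LL executed out of order,
     *                               re-arms the monitor
     * CPU0  sc.w  %t, (A)    SC(3): succeeds against the re-armed monitor,
     *                               silently overwriting CPU1's store
     */

If the SC can pair with a monitor armed by a reordered LL while its compare used the older LL's value, the atomicity of the whole sequence is gone.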

>
> >
> > So, __WEAK_LLSC_MB [1] is used to prevent this load/load reordering,
> > and no explicit barrier instruction is required after SC.
> >
> > [1] https://lore.kernel.org/loongarch/20230516124536.535343-1-chenhuacai@xxxxxxxxxxx/
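Per that patch, __WEAK_LLSC_MB expands to a dbar carrying a same-address load-to-load ordering hint, roughly as below (hint value from memory; cores that do not implement the hint treat it as a full "dbar 0"):

    /* arch/loongarch/include/asm/barrier.h (sketch) */
    #define __WEAK_LLSC_MB  "       dbar 0x700      \n"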
> >
> > Regards,
> > --
> > WANG Rui
> >
>
>
> --
> Best Regards
> Guo Ren



--
Best Regards
Guo Ren