On 18 Nov 2024, at 23:01, Cyril Bur <cyrilbur@xxxxxxxxxxxxxxx> wrote:
> From: Jisheng Zhang <jszhang@xxxxxxxxxx>
>
> I believe the output constraints "=m" is not necessary, because
> the instruction itself is "write", we don't need the compiler
> to "write" for us. So tell compiler we read from memory instead
> of writing.
That’s not what =m does. =m tells the compiler “this assembly will be
writing to this memory location, and needs the address of the location
for that operand”. If you move it from an output to an input then you
get the same assembly generated, but the compiler believes you are
*reading* from that memory, not *writing* to it, and so does not
believe the memory may be clobbered.
Now, it may well be that, by virtue of this being userspace memory that
is only ever accessed via assembly, and any inline assembly being
marked volatile, this in effect doesn’t matter. But it is still
technically wrong to model it that way, and you cannot use the
justification you are using, because it is false, and demonstrates a
lack of understanding of how inline assembly works. It has to be
correctly justified.
(I do note that neither x86 nor arm64 seems to model this as an output)
Jess
> Signed-off-by: Jisheng Zhang <jszhang@xxxxxxxxxx>
> Signed-off-by: Cyril Bur <cyrilbur@xxxxxxxxxxxxxxx>
> ---
> arch/riscv/include/asm/uaccess.h | 18 +++++++++---------
> 1 file changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/arch/riscv/include/asm/uaccess.h b/arch/riscv/include/asm/uaccess.h
> index 09d4ca37522c..84b084e388a7 100644
> --- a/arch/riscv/include/asm/uaccess.h
> +++ b/arch/riscv/include/asm/uaccess.h
> @@ -186,11 +186,11 @@ do { \
> 	__typeof__(*(ptr)) __x = x; \
> 	__asm__ __volatile__ ( \
> 		"1:\n" \
> -		"	" insn " %z2, %1\n" \
> +		"	" insn " %z1, %2\n" \
> 		"2:\n" \
> 		_ASM_EXTABLE_UACCESS_ERR(1b, 2b, %0) \
> -		: "+r" (err), "=m" (*(ptr)) \
> -		: "rJ" (__x)); \
> +		: "+r" (err) \
> +		: "rJ" (__x), "m"(*(ptr))); \
> } while (0)
>
> #ifdef CONFIG_64BIT
> @@ -203,16 +203,16 @@ do { \
> 	u64 __x = (__typeof__((x)-(x)))(x); \
> 	__asm__ __volatile__ ( \
> 		"1:\n" \
> -		"	sw %z3, %1\n" \
> +		"	sw %z1, %3\n" \
> 		"2:\n" \
> -		"	sw %z4, %2\n" \
> +		"	sw %z2, %4\n" \
> 		"3:\n" \
> 		_ASM_EXTABLE_UACCESS_ERR(1b, 3b, %0) \
> 		_ASM_EXTABLE_UACCESS_ERR(2b, 3b, %0) \
> -		: "+r" (err), \
> -		"=m" (__ptr[__LSW]), \
> -		"=m" (__ptr[__MSW]) \
> -		: "rJ" (__x), "rJ" (__x >> 32)); \
> +		: "+r" (err) \
> +		: "rJ" (__x), "rJ" (__x >> 32), \
> +		"m" (__ptr[__LSW]), \
> +		"m" (__ptr[__MSW])); \
> } while (0)
> #endif /* CONFIG_64BIT */
> --
> 2.34.1

_______________________________________________
linux-riscv mailing list
linux-riscv@xxxxxxxxxxxxxxxxxxx
http://lists.infradead.org/mailman/listinfo/linux-riscv