It was reported that the riscv kernel hangs while executing the test
in [1].
Indeed, the test hangs when trying to write a buffer to a file. The
problem is that the riscv implementation of raw_copy_from_user() does not
return the number of bytes left uncopied when an exception happens and is
fixed up, but instead always returns the full size of the copy.
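
For reference, the contract the generic code relies on is that the copy
routines return the number of bytes they could not copy. A minimal sketch
of that contract (illustration only; __arch_copy() below is a made-up
helper standing in for the assembly routine):

  /*
   * Illustrative sketch, not the actual riscv implementation:
   * __arch_copy() is a hypothetical helper that returns how many bytes
   * it managed to copy before (possibly) faulting.
   */
  static inline unsigned long
  sketch_raw_copy_from_user(void *to, const void __user *from, unsigned long n)
  {
          unsigned long copied = __arch_copy(to, from, n);

          /* 0 on full success, otherwise the number of bytes left uncopied */
          return n - copied;
  }
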
generic_perform_write() pre-faults the user pages and bails out if none
of them is accessible; otherwise it goes on to access the userspace
buffer. Here the riscv implementation keeps reporting that it was not
able to copy a single byte even though the pre-faulting indicates
otherwise, so generic_perform_write() keeps retrying the access and ends
up in an infinite loop.
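
A condensed sketch of that retry logic (illustration only, not the exact
mainline code: write_begin/write_end and the iov_iter bookkeeping are
omitted) shows why a copy routine that always reports zero progress never
terminates:

  again:
          /* pre-fault the user pages; bail out only if nothing is readable */
          if (fault_in_iov_iter_readable(i, bytes) == bytes)
                  return -EFAULT;

          /* on riscv this eventually ends up in raw_copy_from_user() */
          copied = copy_page_from_iter_atomic(page, offset, bytes, i);

          /* a routine that always reports "nothing copied" loops here forever */
          if (unlikely(copied == 0))
                  goto again;
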
Note that before the commit mentioned in [1] that introduced this
regression, this worked because generic_perform_write() would bail out as
soon as a single byte could not be written.
So fix this by returning the number of bytes effectively not copied in
__asm_copy_[to|from]_user() and __clear_user(), as the callers expect.
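
In C terms, the fixup paths now derive their return value from the
terminal address saved on entry instead of reporting the full size,
roughly like this illustrative helper (names made up):

  /* Illustration only: t5 holds dst_end (a0 + a2 saved on entry), a0 holds dst_cur. */
  static unsigned long bytes_not_copied(const char *dst_end, const char *dst_cur)
  {
          return dst_end - dst_cur;
  }
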
[1] https://lore.kernel.org/linux-riscv/20230309151841.bomov6hq3ybyp42a@debian/
Fixes: ebcbd75e3962 ("riscv: Fix the bug in memory access fixup code")
Reported-by: Bo YU <tsu.yubo@xxxxxxxxx>
Closes: https://lore.kernel.org/linux-riscv/20230309151841.bomov6hq3ybyp42a@debian/#t
Reported-by: Aurelien Jarno <aurelien@xxxxxxxxxxx>
Closes: https://lore.kernel.org/linux-riscv/ZNOnCakhwIeue3yr@xxxxxxxxxxx/
Signed-off-by: Alexandre Ghiti <alexghiti@xxxxxxxxxxxx>
---
arch/riscv/lib/uaccess.S | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/riscv/lib/uaccess.S b/arch/riscv/lib/uaccess.S
index ec486e5369d9..09b47ebacf2e 100644
--- a/arch/riscv/lib/uaccess.S
+++ b/arch/riscv/lib/uaccess.S
@@ -17,8 +17,11 @@ ENTRY(__asm_copy_from_user)
li t6, SR_SUM
csrs CSR_STATUS, t6
- /* Save for return value */
- mv t5, a2
+ /*
+ * Save the terminal address which will be used to compute the number
+ * of bytes copied in case of a fixup exception.
+ */
+ add t5, a0, a2
/*
* Register allocation for code below:
@@ -176,7 +179,7 @@ ENTRY(__asm_copy_from_user)
10:
/* Disable access to user memory */
csrc CSR_STATUS, t6
- mv a0, t5
+ sub a0, t5, a0
ret
ENDPROC(__asm_copy_to_user)
ENDPROC(__asm_copy_from_user)
@@ -228,7 +231,7 @@ ENTRY(__clear_user)
11:
/* Disable access to user memory */
csrc CSR_STATUS, t6
- mv a0, a1
+ sub a0, a3, a0
ret
ENDPROC(__clear_user)
EXPORT_SYMBOL(__clear_user)