[PATCH 3/6] x86, memcpy_mcsafe: add write-protection-fault handling

From: Dan Williams
Date: Tue May 01 2018 - 16:55:29 EST


In preparation for using memcpy_mcsafe() to handle user copies, it needs
to be able to handle write-protection faults while writing to user pages.
Add MMU-fault handlers alongside the machine-check exception handlers.

Note that the machine-check exception handling makes assumptions about
source-buffer alignment and poison alignment. In the write-fault case,
since the destination buffer is arbitrarily aligned, a separate /
additional fault-handling approach is needed. The mcsafe_handle_tail()
helper is reused. The @limit argument is set to @len since there is no
safety concern about retriggering an MMU fault, and this simplifies the
assembly.

Cc: <x86@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Tony Luck <tony.luck@xxxxxxxxx>
Cc: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Co-developed-by: Tony Luck <tony.luck@xxxxxxxxx>
Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>
---
arch/x86/lib/memcpy_64.S | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
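
Note on the fallback semantics (this note is below the cut line and not
part of the commit): conceptually, when a write takes an MMU fault the
fixup code computes the remaining byte count and tail-calls
mcsafe_handle_tail(), which retries byte-by-byte and returns how many
bytes were left uncopied. A rough C model of that behavior, with a
hypothetical copy_one_byte() standing in for the faultable per-byte copy
(names here are illustrative, not the kernel source):

```c
#include <stddef.h>
#include <string.h>

/* Stand-in for the per-byte copy that may fault; in this model it
 * always succeeds. A nonzero return would mean a fault on this byte. */
static int copy_one_byte(char *dst, const char *src)
{
	*dst = *src;
	return 0;
}

/* Conceptual model of mcsafe_handle_tail(): copy byte-by-byte until a
 * fault or completion, and report the bytes that remain uncopied. With
 * @limit == @len there is no early cutoff, matching the write-fault
 * case described in the changelog. */
static size_t mcsafe_handle_tail_model(char *to, const char *from,
				       size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		if (copy_one_byte(to + i, from + i))
			break;
	return len - i;
}
```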

diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 97b772fcf62f..fc9c1f594c71 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -345,6 +345,16 @@ EXPORT_SYMBOL_GPL(memcpy_mcsafe_unrolled)
mov %ecx, %eax
ret

+.E_write_cache_X:
+ shll $6, %ecx
+ jmp .E_handle_tail
+.E_write_trailing_words:
+ shll $3, %ecx
+.E_handle_tail:
+ addl %edx, %ecx
+ movl %ecx, %edx
+ jmp mcsafe_handle_tail
+
.previous

_ASM_EXTABLE_FAULT(.L_read_leading_bytes, .E_leading_bytes)
@@ -358,4 +368,15 @@ EXPORT_SYMBOL_GPL(memcpy_mcsafe_unrolled)
_ASM_EXTABLE_FAULT(.L_cache_r7, .E_cache_7)
_ASM_EXTABLE_FAULT(.L_read_trailing_words, .E_trailing_words)
_ASM_EXTABLE_FAULT(.L_read_trailing_bytes, .E_trailing_bytes)
+ _ASM_EXTABLE(.L_write_leading_bytes, .E_leading_bytes)
+ _ASM_EXTABLE(.L_cache_w0, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w1, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w2, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w3, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w4, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w5, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w6, .E_write_cache_X)
+ _ASM_EXTABLE(.L_cache_w7, .E_write_cache_X)
+ _ASM_EXTABLE(.L_write_trailing_words, .E_write_trailing_words)
+ _ASM_EXTABLE(.L_write_trailing_bytes, .E_trailing_bytes)
#endif
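
For reviewers, a sketch of the fixup arithmetic above (helper names here
are hypothetical, for illustration only): at the .L_cache_wX sites %ecx
holds remaining 64-byte cache lines, so `shll $6` converts it to bytes;
at .L_write_trailing_words it holds remaining 8-byte words, so `shll $3`
converts those; either result is then added to the trailing-byte count
already in %edx to form the total bytes left, which becomes the @len
(== @limit) argument to mcsafe_handle_tail():

```c
#include <stddef.h>

/* .E_write_cache_X: remaining 64-byte cache lines -> bytes (shll $6) */
static size_t cache_lines_to_bytes(size_t lines)
{
	return lines << 6;
}

/* .E_write_trailing_words: remaining 8-byte words -> bytes (shll $3) */
static size_t words_to_bytes(size_t words)
{
	return words << 3;
}

/* .E_handle_tail: total bytes left = converted count plus the trailing
 * bytes already held in %edx (addl %edx, %ecx; movl %ecx, %edx). */
static size_t bytes_remaining(size_t converted, size_t trailing)
{
	return converted + trailing;
}
```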