[PATCH 3.2 24/62] x86/uaccess/64: Handle the caching of 4-byte nocache copies properly in __copy_user_nocache()

From: Ben Hutchings
Date: Tue Mar 29 2016 - 16:07:13 EST

3.2.79-rc1 review patch. If anyone has any objections, please let me know.


From: Toshi Kani <toshi.kani@xxxxxxx>

commit a82eee7424525e34e98d821dd059ce14560a1e35 upstream.

Data corruption issues were observed in tests which initiated
a system crash/reset while accessing BTT devices. This problem
is reproducible.

The BTT driver calls pmem_rw_bytes() to update data in pmem
devices. This interface calls __copy_user_nocache(), which
uses non-temporal stores so that the stores to pmem are
persistent.

__copy_user_nocache() uses non-temporal stores when a request
size is 8 bytes or larger (and is aligned by 8 bytes). The
BTT driver updates the BTT map table, whose entry size is
4 bytes. Therefore, updates to the map table entries remain
cached, and are not written to pmem after a crash.
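
To make the failure mode concrete, the old dispatch can be modelled
by the user-space C sketch below (our illustration, not the kernel
code; the function name is ours and memcpy() stands in for both
copy loops):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Rough user-space model of the OLD __copy_user_nocache() dispatch:
 * only a request of 8+ bytes to an 8-byte-aligned destination takes
 * the non-temporal (movnti) path, so a 4-byte BTT map entry always
 * falls through to a cached copy and can be lost across a crash.
 */
static void old_nocache_copy_model(void *dst, const void *src, size_t len)
{
	size_t head = 0;

	if (len >= 8 && ((uintptr_t)dst & 7) == 0) {
		head = len & ~(size_t)7;
		memcpy(dst, src, head);	/* kernel: movnti loop, cache-bypassing */
	}
	/* the tail -- and any whole request under 8 bytes, such as a
	 * 4-byte BTT map entry -- goes through the cache */
	memcpy((char *)dst + head, (const char *)src + head, len - head);
}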

Change __copy_user_nocache() to use a non-temporal store when
the request size is 4 bytes. The change extends the current
byte-copy path for requests smaller than 8 bytes, and does not
add any overhead to the regular path.
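
In user space, the effect of the new 4-byte path can be sketched
with the SSE2 intrinsic for movnti, _mm_stream_si32() (again our
illustration; the function name is ours, and the kernel does this
in assembly):

#include <emmintrin.h>	/* SSE2: _mm_stream_si32() emits movnti */
#include <stdint.h>
#include <string.h>

/*
 * Sketch of the NEW 4-byte case: dst is assumed 4-byte aligned,
 * which the patched code checks before taking this path.
 */
static void new_4b_nocache_copy_model(uint32_t *dst, const void *src)
{
	int32_t v;

	memcpy(&v, src, sizeof(v));	/* 30: movl (%rsi),%r8d   */
	_mm_stream_si32((int *)dst, v);	/* 31: movnti %r8d,(%rdi) */
	_mm_sfence();			/* streaming stores are weakly ordered */
}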

Reported-and-tested-by: Micah Parrish <micah.parrish@xxxxxxx>
Reported-and-tested-by: Brian Boylston <brian.boylston@xxxxxxx>
Signed-off-by: Toshi Kani <toshi.kani@xxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxx>
Cc: Brian Gerst <brgerst@xxxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
Cc: H. Peter Anvin <hpa@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Luis R. Rodriguez <mcgrof@xxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Ross Zwisler <ross.zwisler@xxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Toshi Kani <toshi.kani@xxxxxx>
Cc: Vishal Verma <vishal.l.verma@xxxxxxxxx>
Cc: linux-nvdimm@xxxxxxxxxxxx
Link: http://lkml.kernel.org/r/1455225857-12039-3-git-send-email-toshi.kani@xxxxxxx
[ Small readability edits. ]
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
[bwh: Backported to 3.2: adjust filename, context]
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
 arch/x86/lib/copy_user_nocache_64.S | 36 ++++++++++++++++++++++++++++++++----
 1 file changed, 32 insertions(+), 4 deletions(-)

--- a/arch/x86/lib/copy_user_nocache_64.S
+++ b/arch/x86/lib/copy_user_nocache_64.S
@@ -49,13 +49,14 @@
  * Note: Cached memory copy is used when destination or size is not
  * naturally aligned. That is:
  *  - Require 8-byte alignment when size is 8 bytes or larger.
+ *  - Require 4-byte alignment when size is 4 bytes.
  */
 ENTRY(__copy_user_nocache)
 	CFI_STARTPROC
 
-	/* If size is less than 8 bytes, go to byte copy */
+	/* If size is less than 8 bytes, go to 4-byte copy */
 	cmpl $8,%edx
-	jb .L_1b_cache_copy_entry
+	jb .L_4b_nocache_copy_entry
 
 	/* If destination is not 8-byte aligned, "cache" copy to align it */
 	ALIGN_DESTINATION
@@ -94,7 +95,7 @@ ENTRY(__copy_user_nocache)
 	movl %edx,%ecx
 	andl $7,%edx
 	shrl $3,%ecx
-	jz .L_1b_cache_copy_entry	/* jump if count is 0 */
+	jz .L_4b_nocache_copy_entry	/* jump if count is 0 */
 
 	/* Perform 8-byte nocache loop-copy */
 .L_8b_nocache_copy_loop:
@@ -106,11 +107,33 @@ ENTRY(__copy_user_nocache)
 	jnz .L_8b_nocache_copy_loop
 
 	/* If no byte left, we're done */
-.L_1b_cache_copy_entry:
+.L_4b_nocache_copy_entry:
+	andl %edx,%edx
+	jz .L_finish_copy
+
+	/* If destination is not 4-byte aligned, go to byte copy: */
+	movl %edi,%ecx
+	andl $3,%ecx
+	jnz .L_1b_cache_copy_entry
+
+	/* Set 4-byte copy count (1 or 0) and remainder */
+	movl %edx,%ecx
+	andl $3,%edx
+	shrl $2,%ecx
+	jz .L_1b_cache_copy_entry	/* jump if count is 0 */
+
+	/* Perform 4-byte nocache copy: */
+30:	movl (%rsi),%r8d
+31:	movnti %r8d,(%rdi)
+	leaq 4(%rsi),%rsi
+	leaq 4(%rdi),%rdi
+
+	/* If no bytes left, we're done: */
+.L_1b_cache_copy_entry:
 	andl %edx,%edx
 	jz .L_finish_copy
 
 	/* Perform byte "cache" loop-copy for the remainder */
 .L_1b_cache_copy_loop:
 	movl %edx,%ecx
 40:	movb (%rsi),%al
@@ -134,6 +157,9 @@ ENTRY(__copy_user_nocache)
 .L_fixup_8b_copy:
 	lea (%rdx,%rcx,8),%rdx
 	jmp .L_fixup_handle_tail
+.L_fixup_4b_copy:
+	lea (%rdx,%rcx,4),%rdx
+	jmp .L_fixup_handle_tail
 .L_fixup_1b_copy:
 	movl %ecx,%edx
 .L_fixup_handle_tail:
@@ -159,6 +185,8 @@ ENTRY(__copy_user_nocache)
 	_ASM_EXTABLE(16b,.L_fixup_4x8b_copy)
 	_ASM_EXTABLE(20b,.L_fixup_8b_copy)
 	_ASM_EXTABLE(21b,.L_fixup_8b_copy)
+	_ASM_EXTABLE(30b,.L_fixup_4b_copy)
+	_ASM_EXTABLE(31b,.L_fixup_4b_copy)
 	_ASM_EXTABLE(40b,.L_fixup_1b_copy)
 	_ASM_EXTABLE(41b,.L_fixup_1b_copy)
 	CFI_ENDPROC
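
For completeness, the fixup added above: on a fault at label 30 or
31, %ecx holds the outstanding 4-byte count (at most 1 on this path)
and %edx the byte remainder, so the new lea recomputes the bytes not
yet copied before the common tail handling, mirroring the existing
8-byte fixup. As a C sketch (function name ours):

static unsigned long fixup_4b_bytes_left(unsigned long rdx, unsigned long rcx)
{
	return rdx + rcx * 4;	/* lea (%rdx,%rcx,4),%rdx */
}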