[PATCH] powerpc/32: fix again csum_partial_copy_generic()

From: Christophe Leroy
Date: Fri Aug 26 2016 - 10:45:41 EST


commit 7aef4136566b0 ("powerpc32: rewrite csum_partial_copy_generic()
based on copy_tofrom_user()") introduced a bug when the destination
address is odd and len is lower than the cacheline size.

In that case the resulting csum value must not be rotated one byte,
because the cache-aligned copy part is skipped, so no re-alignment of
the destination is performed.
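
To illustrate the property at play, here is a minimal user-space
sketch (not the kernel routine; csum() and fold() are invented
helpers): summing the same payload bytes on a 16-bit word grid that is
shifted by one byte gives the byte-rotated sum, which is why the csum
only needs the one-byte rotation when the copy path actually
re-aligns an odd destination.

#include <stdint.h>
#include <stdio.h>

/* Fold a 32-bit accumulator into a 16-bit ones' complement sum. */
static uint16_t fold(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

/* Ones' complement sum of big-endian 16-bit words. */
static uint16_t csum(const uint8_t *p, size_t len)
{
	uint32_t sum = 0;

	for (size_t i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)(p[i] << 8 | p[i + 1]);
	if (len & 1)
		sum += (uint32_t)p[len - 1] << 8;
	return fold(sum);
}

int main(void)
{
	/* Same six payload bytes, once on an even word grid and once
	 * shifted one byte (zero-padded at both ends). */
	uint8_t aligned[] = { 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc };
	uint8_t shifted[] = { 0x00, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0x00 };
	uint16_t s1 = csum(aligned, sizeof(aligned));
	uint16_t s2 = csum(shifted, sizeof(shifted));

	printf("aligned %04x, shifted %04x, shifted rotated %04x\n",
	       (unsigned)s1, (unsigned)s2,
	       (unsigned)(uint16_t)(s2 << 8 | s2 >> 8));
	return 0;
}

This prints 0369, 6903 and 0369: the shifted-grid sum is the
aligned sum rotated by one byte.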

Fixes: 7aef4136566b0 ("powerpc32: rewrite csum_partial_copy_generic() based on copy_tofrom_user()")
Cc: stable@xxxxxxxxxxxxxxx

Reported-by: Alessio Igor Bogani <alessio.bogani@xxxxxxxxxx>
Signed-off-by: Christophe Leroy <christophe.leroy@xxxxxx>
Tested-by: Alessio Igor Bogani <alessio.bogani@xxxxxxxxxx>
---
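Note (below the tearline, so it stays out of the commit message): a
rough C model of the fixed control flow, assuming cr7.eq stands for
"no final rotation needed". needs_rotation(), rotl8() and the 32-byte
cacheline are invented for illustration and are not kernel APIs; the
point is that the rotation decision now depends on the cache-aligned
copy path being taken, not on the destination address alone.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CACHELINE_MASK	31	/* assuming 32-byte cache lines */

static uint16_t rotl8(uint16_t v)
{
	return (uint16_t)(v << 8 | v >> 8);
}

/* Returns whether the final csum needs the one-byte rotation, and
 * rotates the initial csum when it does. */
static bool needs_rotation(uintptr_t dst, size_t len, uint16_t *csum)
{
	size_t to_boundary = -dst & CACHELINE_MASK;
	bool rotate = false;	/* crset 4*cr7+eq: assume "even" */

	/* The word grid only shifts when the cache-aligned copy path
	 * is taken, i.e. the boundary is both non-zero and reachable
	 * within len; the buggy code decided on (dst & 1) alone. */
	if (to_boundary != 0 && len >= to_boundary) {
		rotate = dst & 1;
		if (rotate)
			*csum = rotl8(*csum);
	}
	return rotate;
}

int main(void)
{
	uint16_t csum = 0x1234;

	/* Odd dst, len below the cacheline size: no rotation. */
	printf("short: rotate=%d csum=%04x\n",
	       (int)needs_rotation(0x1001, 8, &csum), (unsigned)csum);
	/* Odd dst, copy reaches the cacheline boundary: rotation. */
	printf("long:  rotate=%d csum=%04x\n",
	       (int)needs_rotation(0x1001, 64, &csum), (unsigned)csum);
	return 0;
}
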
arch/powerpc/lib/checksum_32.S | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/lib/checksum_32.S b/arch/powerpc/lib/checksum_32.S
index 0a57fe6..aa8214f 100644
--- a/arch/powerpc/lib/checksum_32.S
+++ b/arch/powerpc/lib/checksum_32.S
@@ -127,18 +127,19 @@ _GLOBAL(csum_partial_copy_generic)
 	stw	r7,12(r1)
 	stw	r8,8(r1)
 
-	rlwinm	r0,r4,3,0x8
-	rlwnm	r6,r6,r0,0,31	/* odd destination address: rotate one byte */
-	cmplwi	cr7,r0,0	/* is destination address even ? */
 	addic	r12,r6,0
 	addi	r6,r4,-4
 	neg	r0,r4
 	addi	r4,r3,-4
 	andi.	r0,r0,CACHELINE_MASK	/* # bytes to start of cache line */
+	crset	4*cr7+eq
 	beq	58f
 
 	cmplw	0,r5,r0			/* is this more than total to do? */
 	blt	63f			/* if not much to do */
+	rlwinm	r7,r6,3,0x8
+	rlwnm	r12,r12,r7,0,31	/* odd destination address: rotate one byte */
+	cmplwi	cr7,r7,0	/* is destination address even ? */
 	andi.	r8,r0,3			/* get it word-aligned first */
 	mtctr	r8
 	beq+	61f
--
2.1.0