Re: Hang on large copy_from_user with PREEMPT_NONE

From: Linus Torvalds
Date: Mon Apr 06 2015 - 13:26:24 EST


On Sun, Apr 5, 2015 at 8:59 PM, Sasha Levin <sasha.levin@xxxxxxxxxx> wrote:
>
> This is the result of getting copy_user_handle_tail to zero out a large block of
> kernel memory very inefficiently:

Ugh.

Normally we should be able to just do

        if (zerorest)
                memset(to, 0, len);

and be done with it.

The only reason we don't do that actually looks like a buglet in
'copy_in_user()', which can have a source fault but should *not*
necessarily try to clear the rest of the destination buffer. But it
uses the shared "copy_user_generic()" logic, so it doesn't even
realize that.

I call it a "buglet", because there's not necessarily anything
horribly wrong with clearing the tail, it's just completely wasted
work. And it makes the "oops, source is bad" case unnecessarily
horribly slow.
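
(For reference, copy_in_user() currently looks something like this -
I'm quoting from memory, so don't trust the details:

        unsigned long
        copy_in_user(void __user *to, const void __user *from, unsigned len)
        {
                /* Both pointers are __user pointers: a fault on the
                   source says nothing about the (user) destination
                   needing to be cleared */
                if (access_ok(VERIFY_WRITE, to, len) &&
                    access_ok(VERIFY_READ, from, len))
                        return copy_user_generic((__force void *)to,
                                                 (__force void *)from, len);
                return len;
        }

and the fault fixup in copy_user_generic() is what lands us in
copy_user_handle_tail() regardless of which direction we were copying.)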

In fact, the whole "zerorest" thing is garbage, I think. The
copy_user_generic() code seems to always set it to just 'len', and
it's because it doesn't even know or care about the actual direction.

The *real* test should just be "is the destination a kernel space
buffer" (we have done the "access_ok()" things independently). And
that test we can do without any 'zerorest' parameter.
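
(And to spell out why the tail-zeroing exists at all: it is there for
copy_from_user() with a faulting user source, so that callers who
ignore the return value aren't left staring at uninitialized kernel
memory. Roughly, with some made-up user pointer 'uptr':

        char buf[64];

        /* sloppy caller: return value ignored */
        copy_from_user(buf, uptr, sizeof(buf));
        /* if the source faulted partway through, the unfilled tail
           of buf had better be zero, or we leak stale kernel data
           to whoever reads buf later */

None of that applies when the destination itself is in user space.)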

So the attached (untested) patch should

(a) remove the pointless 'zerorest' parameter

(b) fix the 'copy_in_user()' buglet

(c) make the kernel destination case be much more efficient with just
a simple 'memset()'

Hmm? Comments? Sasha, do you have the initial random number state to
recreate this easily?

Linus

 arch/x86/include/asm/uaccess_64.h |  2 +-
 arch/x86/lib/usercopy_64.c        | 15 +++++++--------
 2 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 12a26b979bf1..f2f9b39b274a 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -231,6 +231,6 @@ __copy_from_user_inatomic_nocache(void *dst, const void __user *src,
 }
 
 unsigned long
-copy_user_handle_tail(char *to, char *from, unsigned len, unsigned zerorest);
+copy_user_handle_tail(char *to, char *from, unsigned len);
 
 #endif /* _ASM_X86_UACCESS_64_H */
diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
index c905e89e19fe..1f33b3d1fd68 100644
--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -69,21 +69,20 @@ EXPORT_SYMBOL(copy_in_user);
  * it is not necessary to optimize tail handling.
  */
 __visible unsigned long
-copy_user_handle_tail(char *to, char *from, unsigned len, unsigned zerorest)
+copy_user_handle_tail(char *to, char *from, unsigned len)
 {
-        char c;
-        unsigned zero_len;
-
         for (; len; --len, to++) {
+                char c;
+
                 if (__get_user_nocheck(c, from++, sizeof(char)))
                         break;
                 if (__put_user_nocheck(c, to, sizeof(char)))
                         break;
         }
-
-        for (c = 0, zero_len = len; zerorest && zero_len; --zero_len)
-                if (__put_user_nocheck(c, to++, sizeof(char)))
-                        break;
         clac();
+
+        /* If the destination is a kernel buffer, we always clear the end */
+        if ((unsigned long)to >= TASK_SIZE_MAX)
+                memset(to, 0, len);
         return len;
 }