Re: [PATCH] lib/strscpy: remove word-at-a-time optimization.
From: Dmitry Vyukov
Date: Tue Jan 30 2018 - 04:12:43 EST
On Thu, Jan 25, 2018 at 8:13 PM, Andrey Ryabinin
<aryabinin@xxxxxxxxxxxxx> wrote:
> On 01/25/2018 08:55 PM, Linus Torvalds wrote:
>> On Thu, Jan 25, 2018 at 12:32 AM, Dmitry Vyukov <dvyukov@xxxxxxxxxx> wrote:
>>> On Wed, Jan 24, 2018 at 6:52 PM, Linus Torvalds
>>> <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>>>>
>>>> So I'd *much* rather have some way to tell KASAN that word-at-a-time
>>>> is going on. Because that approach definitely makes a difference in
>>>> other places.
>>>
>>> The other option was to use READ_ONCE_NOCHECK().
>>
>> How about just using the same accessor that we do for the dcache case.
>> That gives a reasonable example of the whole word-at-a-time model, and
>> should be good.
>>
>
> If we also instrument load_unaligned_zeropad() with kasan_check_read(addr, 1),
> then it should be fine. We don't want a completely unchecked read of the source string.
>
> But I also would like to revert df4c0e36f1b1 ("fs: dcache: manually unpoison dname after allocation to shut up kasan's reports")
> So I was going to send something like the hunk below (split into several patches).
>
> Or we could just use the instrumented load_unaligned_zeropad() everywhere, but it seems wrong
> to use it to load *cs only to shut up KASAN.
>
>
> ---
> fs/dcache.c | 2 +-
> include/linux/compiler.h | 11 +++++++++++
> lib/string.c | 2 +-
> 3 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/fs/dcache.c b/fs/dcache.c
> index 5c7df1df81ff..6aa7be55a96d 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -195,7 +195,7 @@ static inline int dentry_string_cmp(const unsigned char *cs, const unsigned char
> unsigned long a,b,mask;
>
> for (;;) {
> - a = *(unsigned long *)cs;
> + a = READ_PARTIAL_CHECK(*(unsigned long *)cs);
> b = load_unaligned_zeropad(ct);
> if (tcount < sizeof(unsigned long))
> break;
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 52e611ab9a6c..85b63c2e196e 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -240,6 +240,7 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
> * required ordering.
> */
> #include <asm/barrier.h>
> +#include <linux/kasan-checks.h>
>
> #define __READ_ONCE(x, check) \
> ({ \
> @@ -259,6 +260,16 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
> */
> #define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0)
>
> +#ifdef CONFIG_KASAN
> +#define READ_PARTIAL_CHECK(x) \
> +({ \
> + kasan_check_read(&(x), 1); \
> + READ_ONCE_NOCHECK(x); \
> +})
> +#else
> +#define READ_PARTIAL_CHECK(x) (x)
> +#endif
> +
> #define WRITE_ONCE(x, val) \
> ({ \
> union { typeof(x) __val; char __c[1]; } __u = \
> diff --git a/lib/string.c b/lib/string.c
> index 64a9e33f1daa..2396856e4c56 100644
> --- a/lib/string.c
> +++ b/lib/string.c
> @@ -203,7 +203,7 @@ ssize_t strscpy(char *dest, const char *src, size_t count)
> while (max >= sizeof(unsigned long)) {
> unsigned long c, data;
>
> - c = *(unsigned long *)(src+res);
> + c = READ_PARTIAL_CHECK(*(unsigned long *)(src+res));
> if (has_zero(c, &data, &constants)) {
> data = prep_zero_mask(c, data, &constants);
> data = create_zero_mask(data);
Looks good to me as a general way to support the word-at-a-time pattern.
This will also get rid of this in fs/dcache.c:
if (IS_ENABLED(CONFIG_DCACHE_WORD_ACCESS))
kasan_unpoison_shadow(dname,
round_up(name->len + 1, sizeof(unsigned long)));
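
For illustration, a rough user-space sketch of the pattern (kasan_check_read() and
READ_ONCE_NOCHECK() are reduced to stand-in stubs here, so this is not the kernel
implementation): only the first byte of each word is validated, while the whole word
is read unchecked, since the tail of the last word may legitimately run past the NUL
terminator within the same allocation.

/*
 * User-space sketch of READ_PARTIAL_CHECK and the word-at-a-time loop.
 * In the kernel, kasan_check_read() comes from <linux/kasan-checks.h>
 * and READ_ONCE_NOCHECK() from <linux/compiler.h>; here they are stubs.
 */
#include <stdio.h>
#include <string.h>

static void kasan_check_read(const volatile void *p, unsigned int size)
{
	/* Stub: real KASAN verifies that [p, p + size) is addressable. */
	(void)p;
	(void)size;
}

#define READ_ONCE_NOCHECK(x)	(*(const volatile __typeof__(x) *)&(x))

#define READ_PARTIAL_CHECK(x)			\
({						\
	kasan_check_read(&(x), 1);		\
	READ_ONCE_NOCHECK(x);			\
})

/* Classic "word contains a zero byte" bit trick, word-size independent. */
#define ONES	(~0UL / 0xff)		/* 0x0101...01 */
#define HIGHS	(ONES * 0x80)		/* 0x8080...80 */

static int word_has_zero(unsigned long v)
{
	return ((v - ONES) & ~v & HIGHS) != 0;
}

int main(void)
{
	/* Word-aligned, zero-padded buffer, like a kmalloc'ed dname. */
	unsigned long buf[2] = { 0, 0 };
	const char *name = "dentry";
	unsigned int i;

	memcpy(buf, name, strlen(name) + 1);

	for (i = 0; i < sizeof(buf) / sizeof(buf[0]); i++) {
		/* Check only byte 0, then read the full word unchecked. */
		unsigned long w = READ_PARTIAL_CHECK(buf[i]);

		printf("word %u: %#lx, has_zero = %d\n",
		       i, w, word_has_zero(w));
		if (word_has_zero(w))
			break;
	}
	return 0;
}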