Re: [PATCH 3/3] asm-generic, x86: Add bitops instrumentation for KASAN

From: Mark Rutland
Date: Wed May 29 2019 - 09:29:34 EST


On Wed, May 29, 2019 at 12:57:15PM +0200, Dmitry Vyukov wrote:
> On Wed, May 29, 2019 at 12:30 PM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > On Wed, May 29, 2019 at 12:16:31PM +0200, Marco Elver wrote:
> > > On Wed, 29 May 2019 at 12:01, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Wed, May 29, 2019 at 11:20:17AM +0200, Marco Elver wrote:
> > > > > For the default, we decided to err on the conservative side for now,
> > > > > since it seems that e.g. x86 operates only on the byte the bit is on.
> > > >
> > > > This is not correct, see for instance set_bit():
> > > >
> > > > static __always_inline void
> > > > set_bit(long nr, volatile unsigned long *addr)
> > > > {
> > > > 	if (IS_IMMEDIATE(nr)) {
> > > > 		asm volatile(LOCK_PREFIX "orb %1,%0"
> > > > 			: CONST_MASK_ADDR(nr, addr)
> > > > 			: "iq" ((u8)CONST_MASK(nr))
> > > > 			: "memory");
> > > > 	} else {
> > > > 		asm volatile(LOCK_PREFIX __ASM_SIZE(bts) " %1,%0"
> > > > 			: : RLONG_ADDR(addr), "Ir" (nr) : "memory");
> > > > 	}
> > > > }
> > > >
> > > > That results in:
> > > >
> > > > LOCK BTSQ nr, (addr)
> > > >
> > > > when @nr is not an immediate.
> > >
> > > Thanks for the clarification. Given that arm64 already instruments
> > > bitops access to whole words, and x86 may also do so for some bitops,
> > > it seems fine to instrument word-sized accesses by default. Is that
> > > reasonable?
> >
> > Eminently -- the API is defined such; for bonus points KASAN should also
> > do alignment checks on atomic ops. Future hardware will #AC on unaligned
> > [*] LOCK prefix instructions.
> >
> > (*) not entirely accurate, it will only trap when crossing a line.
> > https://lkml.kernel.org/r/1556134382-58814-1-git-send-email-fenghua.yu@xxxxxxxxx
>
> Interesting. Should an address passed to bitops also be aligned, or
> is alignment supposed to be handled by the bitops themselves?
>
> This should probably be done as a separate config option, since it is
> not related to KASAN per se. But obviously it can go via the same
> {atomicops,bitops}-instrumented.h hooks, which will make it
> significantly easier.

Makes sense to me -- that should be easy to hack into gen_param_check()
in gen-atomic-instrumented.sh, something like:

----
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index e09812372b17..2f6b8f521e57 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -21,6 +21,13 @@ gen_param_check()
 	[ ${type#c} != ${type} ] && rw="read"
 
 	printf "\tkasan_check_${rw}(${name}, sizeof(*${name}));\n"
+
+	[ "${type#c}" = "v" ] || return
+
+cat <<EOF
+	if (IS_ENABLED(CONFIG_PETERZ))
+		WARN_ON(!IS_ALIGNED(${name}, sizeof(*${name})));
+EOF
 }

#gen_param_check(arg...)
----
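
With that, the generated wrapper in asm-generic/atomic-instrumented.h
would come out looking something like the below (a rough sketch only:
CONFIG_PETERZ is obviously a placeholder name, and the pointer needs a
cast since IS_ALIGNED() does integer arithmetic):

static inline void
atomic_set(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));
	if (IS_ENABLED(CONFIG_PETERZ))
		WARN_ON(!IS_ALIGNED((unsigned long)v, sizeof(*v)));
	arch_atomic_set(v, i);
}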

On arm64 our atomic instructions always perform an alignment check, so
we'd only miss if an atomic op bailed out after a plain READ_ONCE() of
an unaligned atomic variable.
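
FWIW, on the bitops side, instrumenting word-sized accesses as
discussed above would mean the wrappers check the whole long the bit
lives in, something like (a sketch, not the actual patch):

static inline void set_bit(long nr, volatile unsigned long *addr)
{
	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
	arch_set_bit(nr, addr);
}

... and an equivalent alignment check on addr could sit behind the
same config option.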

Thanks,
Mark.