Re: [PATCH] locking/refcounts, x86/asm: Use unique .text section for refcount exceptions
From: Ard Biesheuvel
Date: Sat Sep 02 2017 - 04:12:35 EST
On 2 September 2017 at 03:58, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
> On Fri, Sep 1, 2017 at 2:43 PM, Ard Biesheuvel
> <ard.biesheuvel@xxxxxxxxxx> wrote:
>> On 1 September 2017 at 21:22, Kees Cook <keescook@xxxxxxxxxxxx> wrote:
>>> Using .text.unlikely for refcount exceptions isn't safe because gcc may
>>> move entire functions into .text.unlikely (e.g. in6_dev_get()). The
>>> .pushsection then targets the section the code is already in, so the
>>> exception stub stays inline and the non-taken 'js' falls through into
>>> it, triggering the protection unconditionally:
>>>
>>> .section .text.unlikely,"ax",@progbits
>>> .type in6_dev_get, @function
>>> in6_dev_get:
>>> .LFB4673:
>>> .loc 2 4128 0
>>> .cfi_startproc
>>> ...
>>> lock; incl 480(%rbx)
>>> js 111f
>>> .pushsection .text.unlikely
>>> 111: lea 480(%rbx), %rcx
>>> 112: .byte 0x0f, 0xff
>>> .popsection
>>> 113:
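For illustration, with the section rename in the patch below, the same emitted
sequence would instead land in a dedicated section, roughly (a sketch derived
from the listing above, not actual compiler output):

	.pushsection .text..refcount
111:	lea 480(%rbx), %rcx
112:	.byte 0x0f, 0xff
	.popsection
113:

so even when gcc moves in6_dev_get() itself into .text.unlikely, the trapping
stub still ends up out of line.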
>>>
>>> This creates a unique .text section for the exception code and adds an
>>> additional test to the exception handler to WARN when none of OF, SF,
>>> or ZF is set, so that cases like this are easier to spot in the future.
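For illustration only: the extable.c hunk is not part of the quoted diff
below, so the following is just a sketch of the kind of check being described
(placed in the refcount fixup handler, presumably ex_handler_refcount()), not
the actual hunk:

	/*
	 * Sketch, not the real diff: the ud0 stub should only be reached
	 * via the "js" that follows the refcount operation, so if none of
	 * OF, SF or ZF is set in the saved flags, something unexpected
	 * happened and it is worth warning about it.
	 */
	if (!(regs->flags & (X86_EFLAGS_OF | X86_EFLAGS_SF | X86_EFLAGS_ZF)))
		WARN_ONCE(1, "refcount exception reached with no flags set\n");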
>>>
>>> Reported-by: Mike Galbraith <efault@xxxxxx>
>>> Fixes: 7a46ec0e2f48 ("locking/refcounts, x86/asm: Implement fast refcount overflow protection")
>>> Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
>>> ---
>>> arch/x86/Kconfig | 2 +-
>>> arch/x86/include/asm/refcount.h | 2 +-
>>> arch/x86/mm/extable.c | 7 ++++++-
>>> include/asm-generic/vmlinux.lds.h | 1 +
>>> 4 files changed, 9 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>>> index eaa8ff41f424..c6acdcdb3fc6 100644
>>> --- a/arch/x86/Kconfig
>>> +++ b/arch/x86/Kconfig
>>> @@ -56,7 +56,7 @@ config X86
>>> select ARCH_HAS_MMIO_FLUSH
>>> select ARCH_HAS_PMEM_API if X86_64
>>> # Causing hangs/crashes, see the commit that added this change for details.
>>> - select ARCH_HAS_REFCOUNT if BROKEN
>>> + select ARCH_HAS_REFCOUNT
>>> select ARCH_HAS_UACCESS_FLUSHCACHE if X86_64
>>> select ARCH_HAS_SET_MEMORY
>>> select ARCH_HAS_SG_CHAIN
>>> diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
>>> index ff871210b9f2..4e44250e7d0d 100644
>>> --- a/arch/x86/include/asm/refcount.h
>>> +++ b/arch/x86/include/asm/refcount.h
>>> @@ -15,7 +15,7 @@
>>> * back to the regular execution flow in .text.
>>> */
>>> #define _REFCOUNT_EXCEPTION \
>>> - ".pushsection .text.unlikely\n" \
>>> + ".pushsection .text..refcount\n" \
>>
>> You could also use a .subsection here: those are always emitted out of
>> line, but into the same section.
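For illustration, with a .subsection the emitted sequence quoted earlier would
look roughly like this (just a sketch of the idea, reusing the instructions
from the listing above):

	.subsection 1		# out of line, but still in the function's section
111:	lea 480(%rbx), %rcx
112:	.byte 0x0f, 0xff	# ud0
	.subsection 0		# back to the main flow
113: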
>
> I considered it (especially after looking to see how you handled it in
> the arm64 port) but decided against it in favor of collecting them all
> at the end, as with .text.unlikely.
>
OK. On arm64, I needed this because the out-of-line sequence refers back
to the inline code, which resulted in section mismatches when the refcount
API was used from __init code. If that problem does not exist on x86, I
guess it doesn't matter.
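To sketch the difference (illustrative only, not the actual arm64 code): if
the out-of-line block ended with a jump back into the caller, e.g.

	.pushsection .text.unlikely
111:	...			# saturate / report
	jmp	113b		# code reference back into the caller
	.popsection

then, for an __init caller, that jump is a reference from .text.unlikely into
.init.text, which is what modpost reports as a section mismatch. The x86 block
quoted above ends in the trapping ud0 instead of a jump back, so it does not
create such a code reference.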