Re: [RFC PATCH 21/21] x86: Add support for CONFIG_CFI_CLANG

From: Peter Zijlstra
Date: Sat Apr 30 2022 - 05:24:50 EST


On Fri, Apr 29, 2022 at 01:36:44PM -0700, Sami Tolvanen wrote:
> Add CONFIG_CFI_CLANG error handling and allow the config to be selected
> on x86_64.

Might be useful to have an example of the generated code for all this
somewhere, because unless I go build my own clang again, I can't tell
from these patches what the actual codegen looks like.

Going from the below, I seem to be able to reverse engineer some of it:

	.long	\signature
	int3
	int3
my_func:
	ENDBR
	...
	ret

And then the callsites look like (clang *always* uses r11, right?):

	cmpl	\signature, -6(%r11)
	je	1f
	ud2
1:
	call	__x86_indirect_thunk_r11

(Presumably the -6 comes from the 4-byte type hash plus the two int3
bytes sitting right in front of the symbol.)
> Signed-off-by: Sami Tolvanen <samitolvanen@xxxxxxxxxx>
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/include/asm/linkage.h | 7 ++++++
> arch/x86/kernel/traps.c | 39 +++++++++++++++++++++++++++++++++-
> 3 files changed, 46 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index b0142e01002e..01db5c5c4dde 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -108,6 +108,7 @@ config X86
> select ARCH_SUPPORTS_PAGE_TABLE_CHECK if X86_64
> select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
> select ARCH_SUPPORTS_KMAP_LOCAL_FORCE_MAP if NR_CPUS <= 4096
> + select ARCH_SUPPORTS_CFI_CLANG if X86_64
> select ARCH_SUPPORTS_LTO_CLANG
> select ARCH_SUPPORTS_LTO_CLANG_THIN
> select ARCH_USE_BUILTIN_BSWAP
> diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
> index 85865f1645bd..d20acf5ebae3 100644
> --- a/arch/x86/include/asm/linkage.h
> +++ b/arch/x86/include/asm/linkage.h
> @@ -25,6 +25,13 @@
> #define RET ret
> #endif
>
> +#ifdef CONFIG_CFI_CLANG
> +#define __CFI_TYPE(name) \
> + .fill 10, 1, 0x90 ASM_NL \
> + .4byte __kcfi_typeid_##name ASM_NL \
> + .fill 2, 1, 0xcc
> +#endif
> +
> #else /* __ASSEMBLY__ */
>
> #ifdef CONFIG_SLS
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index 1563fb995005..b9e46e6ed83b 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -40,6 +40,7 @@
> #include <linux/hardirq.h>
> #include <linux/atomic.h>
> #include <linux/ioasid.h>
> +#include <linux/cfi.h>
>
> #include <asm/stacktrace.h>
> #include <asm/processor.h>
> @@ -295,6 +296,41 @@ static inline void handle_invalid_op(struct pt_regs *regs)
> ILL_ILLOPN, error_get_trap_addr(regs));
> }
>
> +#ifdef CONFIG_CFI_CLANG
> +void *arch_get_cfi_target(unsigned long addr, struct pt_regs *regs)
> +{
> + char buffer[MAX_INSN_SIZE];
> + int offset;
> + struct insn insn;
> + unsigned long *target;

Reverse xmas tree please.
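
IOW, the same declarations from the patch, just sorted longest to
shortest:

	char buffer[MAX_INSN_SIZE];
	unsigned long *target;
	struct insn insn;
	int offset;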

> +
> + /*
> + * The expected CFI check instruction sequence:
> + *   cmpl    <id>, -6(%reg) ; 7 bytes
> + * je .Ltmp1 ; 2 bytes
> + * ud2 ; <- addr
> + * .Ltmp1:
> + *
> + * Therefore, the target address is in a register that we can
> + * decode from the cmpl instruction.
> + */
> + if (copy_from_kernel_nofault(buffer, (void *)addr - 9, MAX_INSN_SIZE))
> + return NULL;
> + if (insn_decode(&insn, buffer, MAX_INSN_SIZE, INSN_MODE_64))
> + return NULL;

insn_decode_kernel()
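
That is, assuming the helper is still what it was, something like:

	if (insn_decode_kernel(&insn, buffer))
		return NULL;

IIRC that's just insn_decode() with MAX_INSN_SIZE and INSN_MODE_KERN
filled in.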

> + if (insn.opcode.value != 0x81)
> + return NULL;

That's not sufficient to uniquely identify cmpl; opcode 0x81 is a whole
group of immediate ops, so you also need to look at the ModRM reg field
and check for /7 (CMP), I think.
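
Something like this perhaps (untested, going from the opcode map):

	/* cmpl $imm32, r/m32 is 0x81 /7; other reg values are ADD, OR, etc. */
	if (insn.opcode.value != 0x81 ||
	    X86_MODRM_REG(insn.modrm.value) != 7)
		return NULL;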

> +
> + offset = insn_get_modrm_rm_off(&insn, regs);
> + if (offset < 0)
> + return NULL;
> +
> + target = (void *)regs + offset;
> +
> + return (void *)*target;
> +}
> +#endif
> +
> static noinstr bool handle_bug(struct pt_regs *regs)
> {
> bool handled = false;
> @@ -312,7 +348,8 @@ static noinstr bool handle_bug(struct pt_regs *regs)
> */
> if (regs->flags & X86_EFLAGS_IF)
> raw_local_irq_enable();
> - if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {
> + if (report_bug(regs->ip, regs) == BUG_TRAP_TYPE_WARN ||
> + report_cfi(regs->ip, regs) == BUG_TRAP_TYPE_WARN) {

This way you'll first get a BUG splat and then tack on the CFI thing.
Seems a bit daft to have two splats.
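
One way out, just to sketch the shape (is_cfi_trap() is made up here,
it's not something this series adds):

	enum bug_trap_type type;

	/*
	 * Decide which kind of trap this is before printing anything,
	 * so only one of the two reports runs.
	 */
	if (is_cfi_trap(regs->ip))
		type = report_cfi(regs->ip, regs);
	else
		type = report_bug(regs->ip, regs);

	if (type == BUG_TRAP_TYPE_WARN) {
		regs->ip += LEN_UD2;
		handled = true;
	}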

> regs->ip += LEN_UD2;
> handled = true;
> }
> --
> 2.36.0.464.gb9c8b46e94-goog
>