Re: [PATCH 2/4] arm64: implement support for static call trampolines

From: Mark Rutland
Date: Tue Oct 26 2021 - 06:37:20 EST


On Mon, Oct 25, 2021 at 05:10:24PM +0200, Ard Biesheuvel wrote:
> On Mon, 25 Oct 2021 at 17:05, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > On Mon, Oct 25, 2021 at 04:55:17PM +0200, Ard Biesheuvel wrote:
> > > On Mon, 25 Oct 2021 at 16:47, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> >
> > > > Perhaps a little something like so.. Shaves 2 instructions off each
> > > > trampoline.
> > > >
> > > > --- a/arch/arm64/include/asm/static_call.h
> > > > +++ b/arch/arm64/include/asm/static_call.h
> > > > @@ -11,9 +11,7 @@
> > > > " hint 34 /* BTI C */ \n" \
> > > > insn " \n" \
> > > > " ldr x16, 0b \n" \
> > > > - " cbz x16, 1f \n" \
> > > > " br x16 \n" \
> > > > - "1: ret \n" \
> > > > " .popsection \n")
> > > >
> > > > #define ARCH_DEFINE_STATIC_CALL_TRAMP(name, func) \
> > > > --- a/arch/arm64/kernel/patching.c
> > > > +++ b/arch/arm64/kernel/patching.c
> > > > @@ -90,6 +90,11 @@ int __kprobes aarch64_insn_write(void *a
> > > > return __aarch64_insn_write(addr, &i, AARCH64_INSN_SIZE);
> > > > }
> > > >
> > > > +asm("__static_call_ret: \n"
> > > > + "	ret \n");
> > > > +
> > >
> > > This breaks BTI as it lacks the landing pad, and it will be called indirectly.
> >
> > Argh!
> >
> > > > +extern void __static_call_ret(void);
> > > > +
> > >
> > > Better to have an ordinary C function here (with consistent linkage),
> > > but we need to take the address in a way that works with Clang CFI.
> >
> > There is that.
> >
> > > As the two additional instructions are on an ice cold path anyway, I'm
> > > not sure this is an obvious improvement tbh.
> >
> > For me it's both simpler -- by virtue of being more consistent, and
> > smaller. So double win :-)
> >
> > That is; you're already relying on the literal being unconditionally
> > updated for the normal B foo -> NOP path, and having the RET -> NOP path
> > be handled differently is just confusing.
> >
> > At least, that's how I'm seeing it today...
>
> Fair enough. I don't have a strong opinion either way, so I'll let
> some other arm64 folks chime in as well.

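FWIW, if we did want the out-of-line helper, it would need its own
landing pad, since it can only ever be reached via BR. A sketch, using
the same "hint 34" spelling as the trampoline for older assemblers:

| asm("__static_call_ret:	\n"
|     "	hint 34	/* BTI C */	\n"
|     "	ret			\n");

... though as Ard notes, a bare asm symbol like that still doesn't
play nicely with Clang CFI.
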
My preference overall is to keep the trampoline self-contained: I'd
rather keep the RET inline in the trampoline than try to factor it
out, so that all the control flow is clearly in one place.

So I'd prefer that we have the sequence as-is:

| 0: .quad 0x0
| bti c
| < insn >
| ldr x16, 0b
| cbz x16, 1f
| br x16
| 1: ret
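
To make that concrete, the patching side can then handle all the
transitions uniformly: always write the literal, and patch <insn> to a
direct branch when the target is in B range, or to a NOP otherwise.
Rough sketch only, not the actual patch; static_call_write_literal()
is a made-up stand-in for the literal update, which would have to go
via the usual text-poking machinery:

	void arch_static_call_transform(void *site, void *tramp, void *func,
					bool tail)
	{
		/* assuming entry is the BTI, literal at tramp - 8, <insn> at tramp + 4 */
		unsigned long pc = (unsigned long)tramp + 4;
		u32 insn = aarch64_insn_gen_hint(AARCH64_INSN_HINT_NOP);

		/*
		 * Unconditionally keep the literal in sync: NULL makes the
		 * CBZ take the RET path, anything else is a valid BR target.
		 */
		static_call_write_literal(tramp - 8, (u64)func);

		if (func) {
			u32 b = aarch64_insn_gen_branch_imm(pc, (unsigned long)func,
							    AARCH64_INSN_BRANCH_NOLINK);
			/* out-of-B-range targets fall back to the LDR+BR path */
			if (b != AARCH64_BREAK_FAULT)
				insn = b;
		}

		aarch64_insn_patch_text_nosync((void *)pc, insn);
	}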

If we knew these were only called with IRQs enabled (and so we can take
an IPI to generate a context synchronization event), we could patch
<insn> to a RET and point the literal back at the BTI, e.g.

| 0: .quad 0x0
| bti c
| < insn >
| ldr x16, 0b
| br x16

... but I'm pretty sure there are CPUs that will never re-fetch <insn>
in that case, and will get stuck in an infinite loop.
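
(The patching sequence I have in mind there would be something like:

	u32 ret = aarch64_insn_gen_branch_reg(AARCH64_INSN_REG_LR,
					      AARCH64_INSN_BRANCH_RETURN);

	/* CPUs already in the trampoline may still see the old <insn> */
	aarch64_insn_patch_text_nosync(tramp + 4, ret);

	/*
	 * IPI all CPUs; the exception entry/return on each is a context
	 * synchronization event, after which <insn> must be re-fetched.
	 */
	kick_all_cpus_sync();

... which only helps if every CPU that might be spinning in the
trampoline can actually take the IPI.)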

Thanks,
Mark.