Re: [PATCH RFC bpf-next 2/7] x86/ftrace: implement DYNAMIC_FTRACE_WITH_JMP

From: Steven Rostedt

Date: Fri Nov 14 2025 - 11:39:12 EST


On Fri, 14 Nov 2025 17:24:45 +0800
Menglong Dong <menglong8.dong@xxxxxxxxx> wrote:

> --- a/arch/x86/kernel/ftrace_64.S
> +++ b/arch/x86/kernel/ftrace_64.S
> @@ -285,8 +285,18 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)
> ANNOTATE_NOENDBR
> RET
>
> +1:
> + testb $1, %al
> + jz 2f
> + andq $0xfffffffffffffffe, %rax
> + movq %rax, MCOUNT_REG_SIZE+8(%rsp)
> + restore_mcount_regs
> + /* Restore flags */
> + popfq
> + RET
> +
> /* Swap the flags with orig_rax */
> -1: movq MCOUNT_REG_SIZE(%rsp), %rdi
> +2: movq MCOUNT_REG_SIZE(%rsp), %rdi
> movq %rdi, MCOUNT_REG_SIZE-8(%rsp)
> movq %rax, MCOUNT_REG_SIZE(%rsp)
>

So in this case we have:

original_caller:
    call foo -> foo:
                    call fentry -> fentry:
                        [ do ftrace callbacks ]
                        move tramp_addr to stack
                        RET -> tramp_addr
    tramp_addr:
        [..]
        call foo_body -> foo_body:
                             [..]
                             RET -> back to tramp_addr
        [..]
        RET -> back to original_caller

I guess that looks balanced.

-- Steve