Re: [RFC PATCH bpf-next v2 1/4] tracing/kprobe: bpf: Check error injectable event is on function entry
From: Alexei Starovoitov
Date: Tue Dec 26 2017 - 20:57:49 EST
On Tue, Dec 26, 2017 at 04:46:59PM +0900, Masami Hiramatsu wrote:
> Check whether error injectable event is on function entry or not.
> Currently it checks the event is ftrace-based kprobes or not,
> but that is wrong. It should check if the event is on the entry
> of target function. Since error injection will override a function
> to just return with modified return value, that operation must
> be done before the target function starts making stackframe.
>
> As a side effect, bpf error injection is no need to depend on
> function-tracer. It can work with sw-breakpoint based kprobe
> events too.
>
> Signed-off-by: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
> ---
> kernel/trace/Kconfig | 2 --
> kernel/trace/bpf_trace.c | 6 +++---
> kernel/trace/trace_kprobe.c | 8 +++++---
> kernel/trace/trace_probe.h | 12 ++++++------
> 4 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
> index ae3a2d519e50..6400e1bf97c5 100644
> --- a/kernel/trace/Kconfig
> +++ b/kernel/trace/Kconfig
> @@ -533,9 +533,7 @@ config FUNCTION_PROFILER
> config BPF_KPROBE_OVERRIDE
> bool "Enable BPF programs to override a kprobed function"
> depends on BPF_EVENTS
> - depends on KPROBES_ON_FTRACE
> depends on HAVE_KPROBE_OVERRIDE
> - depends on DYNAMIC_FTRACE_WITH_REGS
> default n
> help
> Allows BPF to override the execution of a probed function and
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index f6d2327ecb59..d663660f8392 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -800,11 +800,11 @@ int perf_event_attach_bpf_prog(struct perf_event *event,
> int ret = -EEXIST;
>
> /*
> - * Kprobe override only works for ftrace based kprobes, and only if they
> - * are on the opt-in list.
> + * Kprobe override only works if they are on the function entry,
> + * and only if they are on the opt-in list.
> */
> if (prog->kprobe_override &&
> - (!trace_kprobe_ftrace(event->tp_event) ||
> + (!trace_kprobe_on_func_entry(event->tp_event) ||
> !trace_kprobe_error_injectable(event->tp_event)))
> return -EINVAL;
>
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 91f4b57dab82..265e3e27e8dc 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -88,13 +88,15 @@ static nokprobe_inline unsigned long trace_kprobe_nhit(struct trace_kprobe *tk)
> return nhit;
> }
>
> -int trace_kprobe_ftrace(struct trace_event_call *call)
> +bool trace_kprobe_on_func_entry(struct trace_event_call *call)
> {
> struct trace_kprobe *tk = (struct trace_kprobe *)call->data;
> - return kprobe_ftrace(&tk->rp.kp);
> +
> + return kprobe_on_func_entry(tk->rp.kp.addr, tk->rp.kp.symbol_name,
> + tk->rp.kp.offset);
That would be nice, but did you test this?
My understanding that kprobe will restore all regs and
here we need to override return ip _and_ value.
Could you add a patch with the test the way Josef did
or describe the steps to test this new mode?