Re: [PATCH v6 tip 2/8] tracing: attach BPF programs to kprobes

From: Alexei Starovoitov
Date: Thu Mar 12 2015 - 12:18:50 EST

On 3/12/15 8:15 AM, Peter Zijlstra wrote:
> On Tue, Mar 10, 2015 at 09:18:48PM -0700, Alexei Starovoitov wrote:
> > +unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
> > +{
> > +	unsigned int ret;
> > +	int cpu;
> > +
> > +	if (in_nmi()) /* not supported yet */
> > +		return 1;
> > +
> > +	preempt_disable_notrace();
> > +	cpu = raw_smp_processor_id();
> > +	if (unlikely(per_cpu(bpf_prog_active, cpu)++ != 0)) {
> > +		/* since some bpf program is already running on this cpu,
> > +		 * don't call into another bpf program (same or different)
> > +		 * and don't send kprobe event into ring-buffer,
> > +		 * so return zero here
> > +		 */
> > +		ret = 0;
> > +		goto out;
> > +	}
> > +	rcu_read_lock();

> You've so far tried very hard to not get into tracing; and then you call
> rcu_read_lock() :-)
>
> So either document why this isn't a problem, provide
> rcu_read_lock_notrace() or switch to RCU-sched and thereby avoid the
> problem.

I don't see the problem.
I actually do turn on the func and func_graph tracers from time to time
to debug the bpf core itself. Why would tracing interfere with anything
this patch is doing? When we're inside tracing processing we need to use
only the _notrace() helpers, otherwise recursion will hurt, but this code
is not invoked from there. It's called from
kprobe_ftrace_handler|kprobe_int3_handler->kprobe_dispatcher->
kprobe_perf_func->trace_call_bpf, which are all perfectly traceable.
Probably my copy-paste of the preempt_disable_notrace() line from
stack_trace_call() became the source of confusion? I believe a normal
preempt_disable() here will be just fine. It's actually redundant too,
since preemption is already disabled by the kprobe anyway.
Please help me understand what I'm missing.
