Re: [PATCH] kernel/trace: Add TRACING_ALLOW_PRINTK config option
From: Alexei Starovoitov
Date: Tue Jun 30 2020 - 01:17:06 EST
On Sun, Jun 28, 2020 at 07:43:34PM -0400, Steven Rostedt wrote:
> On Sun, 28 Jun 2020 18:28:42 -0400
> Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> > You create a bpf event just like you create any other event. When a bpf
> > program that uses a bpf_trace_printk() is loaded, you can enable that
> > event from within the kernel. Yes, there's internal interfaces to
> > enabled and disable events just like echoing 1 into
> > tracefs/events/system/event/enable. See trace_set_clr_event().
>
> I just started playing with what the code would look like and have
> this. It can be optimized with per-cpu sets of buffers to remove the
> spin lock. I also didn't put in the enabling of the event, but I'm sure
> you can figure that out.
>
> Warning, not even compiled tested.
Thanks! I see what you mean now.
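So when the first prog that calls bpf_trace_printk() is loaded, something
along these lines in the load path should be enough to flip the event on
(untested sketch, reusing the system/event names from your diff):

	/* enable events/bpf_trace/bpf_trace_printk at prog load time */
	if (trace_set_clr_event("bpf_trace", "bpf_trace_printk", 1))
		pr_warn_ratelimited("could not enable bpf_trace_printk events\n");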
>
> -- Steve
>
> diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
> index 6575bb0a0434..aeba5ee7325a 100644
> --- a/kernel/trace/Makefile
> +++ b/kernel/trace/Makefile
> @@ -31,6 +31,8 @@ ifdef CONFIG_GCOV_PROFILE_FTRACE
> GCOV_PROFILE := y
> endif
>
> +CFLAGS_bpf_trace.o := -I$(src)
not following. why is this needed?
> +
> CFLAGS_trace_benchmark.o := -I$(src)
> CFLAGS_trace_events_filter.o := -I$(src)
>
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index dc05626979b8..01bedf335b2e 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -19,6 +19,9 @@
> #include "trace_probe.h"
> #include "trace.h"
>
> +#define CREATE_TRACE_EVENTS
CREATE_TRACE_POINTS ?
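fwiw I'd expect the new bpf_trace.h to be a standard TRACE_EVENT header,
roughly like this (untested sketch, names guessed from the diff):

	#undef TRACE_SYSTEM
	#define TRACE_SYSTEM bpf_trace

	#if !defined(_TRACE_BPF_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
	#define _TRACE_BPF_TRACE_H

	#include <linux/tracepoint.h>

	TRACE_EVENT(bpf_trace_printk,
		TP_PROTO(const char *bpf_string),
		TP_ARGS(bpf_string),
		TP_STRUCT__entry(
			__string(bpf_string, bpf_string)
		),
		TP_fast_assign(
			__assign_str(bpf_string, bpf_string);
		),
		TP_printk("%s", __get_str(bpf_string))
	);

	#endif /* _TRACE_BPF_TRACE_H */

	#undef TRACE_INCLUDE_PATH
	#define TRACE_INCLUDE_PATH .
	#define TRACE_INCLUDE_FILE bpf_trace
	#include <trace/define_trace.h>

if so, TRACE_INCLUDE_PATH being '.' would also explain the -I$(src) in the
Makefile, since define_trace.h re-includes the header by that path.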
> +#include "bpf_trace.h"
> +
> #define bpf_event_rcu_dereference(p) \
> rcu_dereference_protected(p, lockdep_is_held(&bpf_event_mutex))
>
> @@ -473,13 +476,29 @@ BPF_CALL_5(bpf_trace_printk, char *, fmt, u32, fmt_size, u64, arg1,
> fmt_cnt++;
> }
>
> +static DEFINE_SPINLOCK(trace_printk_lock);
> +#define BPF_TRACE_PRINTK_SIZE 1024
> +
> +static inline void do_trace_printk(const char *fmt, ...)
> +{
> + static char buf[BPF_TRACE_PRINTK_SIZE];
> + unsigned long flags;
> + va_list ap;
> +
> + spin_lock_irqsave(&trace_printk_lock, flags);
> + va_start(ap, fmt);
> + vsnprintf(buf, BPF_TRACE_PRINTK_SIZE, fmt, ap);
> + va_end(ap);
> +
> + trace_bpf_trace_printk(buf);
> + spin_unlock_irqrestore(&trace_printk_lock, flags);
interesting. I don't think anyone would care about spin_lock overhead.
It's better because 'trace_bpf_trace_printk' would be a separate event
that can be individually enabled/disabled?
I guess it can work.
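Re per-cpu buffers: I suppose something like

	static DEFINE_PER_CPU(char [BPF_TRACE_PRINTK_SIZE], bpf_printk_buf);

filled via this_cpu_ptr() under preempt_disable() would drop the lock,
but a bpf prog can fire in irq/NMI context on the same cpu, so it would
still need a nesting counter or a small per-context set of buffers.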
Thanks!