Re: [RFC PATCH 00/11] bpf, trace, dtrace: DTrace BPF program type implementation and sample use

From: Kris Van Hees
Date: Tue May 21 2019 - 14:47:14 EST


On Tue, May 21, 2019 at 10:56:18AM -0700, Alexei Starovoitov wrote:
> On Mon, May 20, 2019 at 11:47:00PM +0000, Kris Van Hees wrote:
> >
> > 2. bpf: add BPF_PROG_TYPE_DTRACE
> >
> > This patch adds BPF_PROG_TYPE_DTRACE as a new BPF program type, without
> > actually providing an implementation. The actual implementation is
> > added in patch 4 (see below). We do it this way because the
> > implementation is being added to the tracing subsystem as a component
> > that I would be happy to maintain (if merged) whereas the declaration
> > of the program type must be in the bpf subsystem. Since the two
> > subsystems are maintained by different people, we split the
> > implementing patches across maintainer boundaries while ensuring that
> > the kernel remains buildable between patches.
>
> None of these kernel patches are necessary for what you want to achieve.

I disagree. The current BPF support for probes ties a specific BPF program
type to a specific set of probes, which means that I cannot write BPF programs
against a more general concept of a 'DTrace probe' and provide functionality
based on that. It also means that if I have a D clause (DTrace probe action
code associated with probes) that is to be executed for a list of probes of
different types, I have to duplicate the program for each type, because a
program cannot cross program type boundaries.

By implementing a program type for DTrace, and making it possible for tail
calls to be made from various probe-specific program types into the DTrace
program type, I can accomplish what I described above; a sketch follows
below. More details are in the cover letter and in the commit messages of
the individual patches.
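
Concretely, the intended flow looks roughly like this (a minimal sketch;
the section and map names are illustrative, and the cross-type tail call
assumes the support added by this series, which is not upstream today):

    #include <linux/bpf.h>
    #include <linux/ptrace.h>
    #include "bpf_helpers.h"

    /* Program array whose slot 0 would hold the one shared
     * BPF_PROG_TYPE_DTRACE action program (the compiled D clause).
     */
    struct bpf_map_def SEC("maps") dtrace_progs = {
        .type        = BPF_MAP_TYPE_PROG_ARRAY,
        .key_size    = sizeof(__u32),
        .value_size  = sizeof(__u32),
        .max_entries = 1,
    };

    SEC("kprobe/do_sys_open")
    int kprobe_stub(struct pt_regs *ctx)
    {
        /* Hand off to the shared D clause in slot 0. Today the
         * verifier only allows tail calls between programs of the
         * same type; the series relaxes that for calls into the
         * DTrace program type.
         */
        bpf_tail_call(ctx, &dtrace_progs, 0);
        return 0;   /* reached only if slot 0 is empty */
    }

    char _license[] SEC("license") = "GPL";

The same trivial stub would exist per probe type (tracepoint, uprobe, etc.),
all tail-calling into the single shared action program instead of duplicating
it once per program type.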

The reason for these patches is that I cannot do the same with the existing
implementation. Yes, I can do some of it, or use workarounds to approximate
it, but only at the cost of not being able to do what I actually need to do,
settling instead for some kind of best-effort alternative. That is not the
goal here.

> Feel free to add tools/dtrace/ directory and maintain it though.

Thank you.

> The new dtrace_buffer doesn't need to replicate existing bpf+kernel functionality
> and no changes are necessary in kernel/events/ring_buffer.c either.
> tools/dtrace/ user space component can use either per-cpu array map
> or hash map as a buffer to store arbitrary data into and use
> existing bpf_perf_event_output() to send it to user space via perf ring buffer.
>
> See, for example, how bpftrace does that.

When using bpf_perf_event_output() you need to construct the sample first,
and then send it off to user space through the perf ring-buffer. That is
unnecessary extra work. Also, storing arbitrary data from user space in maps
is not relevant here, because this is about data that is generated at the
kernel level and sent to user space as part of the probe action that executes
when the probe fires.
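
For reference, the pattern in question looks roughly like this (struct, map,
and probe names are illustrative); note that the sample is fully staged in
per-CPU scratch memory first and then copied a second time by
bpf_perf_event_output():

    #include <linux/bpf.h>
    #include <linux/ptrace.h>
    #include "bpf_helpers.h"

    struct sample {
        __u64 ts;
        __u32 pid;
        char  comm[16];
    };

    struct bpf_map_def SEC("maps") scratch = {
        .type        = BPF_MAP_TYPE_PERCPU_ARRAY,
        .key_size    = sizeof(__u32),
        .value_size  = sizeof(struct sample),
        .max_entries = 1,
    };

    struct bpf_map_def SEC("maps") events = {
        .type        = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
        .key_size    = sizeof(int),
        .value_size  = sizeof(__u32),
        .max_entries = 64,    /* >= number of CPUs */
    };

    SEC("kprobe/do_sys_open")
    int probe(struct pt_regs *ctx)
    {
        __u32 key = 0;
        struct sample *s = bpf_map_lookup_elem(&scratch, &key);

        if (!s)
            return 0;

        /* First pass: build the sample in scratch memory. */
        s->ts  = bpf_ktime_get_ns();
        s->pid = bpf_get_current_pid_tgid() >> 32;
        bpf_get_current_comm(s->comm, sizeof(s->comm));

        /* Second pass: the helper copies the finished sample
         * into the perf ring-buffer.
         */
        bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
                              s, sizeof(*s));
        return 0;
    }

    char _license[] SEC("license") = "GPL";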

Bpftrace indeed uses maps to construct the sample and then uses the perf
ring-buffer to pass the data to user space. That is not the way DTrace works,
and it is not the mechanism we need here. So, while this may be satisfactory
for bpftrace, it is not for DTrace. We need more fine-grained control over
how we write data to the buffer (doing direct stores from BPF code), without
the overhead of constructing a complete sample that can then be handed over
to bpf_perf_event_output().
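
Conceptually, the direct-store model looks like this (a hypothetical sketch:
the bpf_buffer_reserve()/bpf_buffer_commit() helpers and the dtrace_buffer
map are placeholder names of mine, not the actual interface; see the patches
for the real helpers):

    /* HYPOTHETICAL SKETCH: helper names and the dtrace_buffer map
     * (declaration omitted) are illustrative placeholders only.
     */
    SEC("dtrace/syscall-entry")
    int d_clause(void *ctx)
    {
        __u64 *rec;

        /* Reserve record space directly in the output buffer... */
        rec = bpf_buffer_reserve(&dtrace_buffer, 2 * sizeof(__u64));
        if (!rec)
            return 0;

        /* ...and store into it in place: no staging area, and no
         * second copy of a finished sample by a helper.
         */
        rec[0] = bpf_ktime_get_ns();
        rec[1] = bpf_get_current_pid_tgid() >> 32;

        bpf_buffer_commit(&dtrace_buffer, rec);
        return 0;
    }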

Also, please note that I am not duplicating any kernel functionality when it
comes to buffer handling. In fact, I found it very easy to tap into the perf
event ring-buffer implementation and add a feature that DTrace needs. That
was a very pleasant experience for sure!

Kris