Re: [PATCH 1/8] perf/kprobe: Add support to create multiple probes

From: Andrii Nakryiko
Date: Fri Dec 10 2021 - 13:28:55 EST


On Fri, Dec 10, 2021 at 4:42 AM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
>
> On Wed, Dec 08, 2021 at 02:50:09PM +0100, Jiri Olsa wrote:
> > On Mon, Dec 06, 2021 at 07:15:58PM -0800, Andrii Nakryiko wrote:
> > > On Wed, Dec 1, 2021 at 1:32 PM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Nov 30, 2021 at 10:53:58PM -0800, Andrii Nakryiko wrote:
> > > > > On Wed, Nov 24, 2021 at 12:41 AM Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > Adding support to create multiple probes within a single perf event.
> > > > > > This way we can associate a single bpf program with multiple kprobes,
> > > > > > because the bpf program gets associated with the perf event.
> > > > > >
> > > > > > The perf_event_attr is not extended; the current fields for kprobe
> > > > > > attachment are used for multi attachment.
> > > > >
> > > > > I'm a bit concerned with complicating perf_event_attr further to
> > > > > support this multi-attach. For BPF, at least, we now have
> > > > > bpf_perf_link and corresponding BPF_LINK_CREATE command in bpf()
> > > > > syscall, which allows a much simpler and cleaner API to do this. Libbpf
> > > > > will actually pick bpf_link-based attachment if the kernel supports it. I
> > > > > think we'd better take the bpf_link-based approach from the get-go.
> > > > >
> > > > > Another thing I'd like you to keep in mind and think about is BPF
> > > > > cookie. Currently kprobe/uprobe/tracepoint allow associating an
> > > > > arbitrary user-provided u64 value which will be accessible from a BPF
> > > > > program with bpf_get_attach_cookie(). With multi-attach kprobes this
> > > > > becomes a crucial feature to support; otherwise it's expensive,
> > > > > inconvenient, and complicated to distinguish
> > > > > between different instances of the same multi-attach kprobe
> > > > > invocation. So with that, what would be the interface to specify these
> > > > > BPF cookies for this multi-attach kprobe, if we are going through
> > > > > perf_event_attr? Probably by picking yet another unused field and
> > > > > union-izing it with a pointer. It will work, but it makes the interface
> > > > > even more overloaded. While for LINK_CREATE we can just add another
> > > > > pointer to a u64[] with the same size as the number of kfunc names and
> > > > > offsets.
> > > >
> > > > I'm not sure we could bypass perf event easily.. perhaps introduce
> > > > BPF_PROG_TYPE_RAW_KPROBE as we did for tracepoints, or just a new
> > > > type for multi-kprobe attachment like BPF_PROG_TYPE_MULTI_KPROBE;
> > > > that way we'd have full control over the API
> > >
> > > Sure, new type works.
> > >
> > > >
> > > > >
> > > > > But other than that, I'm super happy that you are working on these
> > > > > complicated multi-attach capabilities! It would be great to benchmark
> > > > > one-by-one attachment vs multi-attach to the same set of kprobes once
> > > > > you arrive at the final implementation.
> > > >
> > > > I have a change for bpftrace to use this, and even though there's
> > > > some speedup, it's not as substantial as for trampolines
> > > >
> > > > looks like we 'only' got rid of the multiple perf syscall overheads,
> > > > unlike the rcu sync timeouts we eliminated for trampolines
> > >
> > > if it's just eliminating a pretty small overhead of multiple syscalls,
> > > then it would be quite disappointing to add a bunch of complexity just
> > > for that.
> >
> > I meant it's not as huge a saving as for trampolines, but I expect some
> > noticeable speedup; I'll make more benchmarks with the current patchset
>
> so with this approach there's a noticeable speedup, but it's not the
> 'instant attachment speed' we get with trampolines
>
> as a base I used bpftrace with a change that allows reusing a single bpf
> program for multiple kprobes
>
> bpftrace standard attach of 672 kprobes:
>
> Performance counter stats for './src/bpftrace -vv -e kprobe:kvm* { @[kstack] += 1; } i:ms:10 { printf("KRAVA\n"); exit() }':
>
> 70.548897815 seconds time elapsed
>
> 0.909996000 seconds user
> 50.622834000 seconds sys
>
>
> bpftrace using interface from this patchset attach of 673 kprobes:
>
> Performance counter stats for './src/bpftrace -vv -e kprobe:kvm* { @[kstack] += 1; } i:ms:10 { printf("KRAVA\n"); exit() }':
>
> 36.947586803 seconds time elapsed
>
> 0.272585000 seconds user
> 30.900831000 seconds sys
>
>
> so it's noticeable, but I wonder if it's enough ;-)

A typical retsnoop run for the BPF use case attaches to ~1200 functions.
Ask yourself whether a tool that takes 36 seconds to start up is a great
user experience ;)

>
> jirka
>
> >
> > > Are there any reasons we can't use the same low-level ftrace
> > > batch attach API to speed this up considerably? I assume it's only
> > > possible if kprobe is attached at the beginning of the function (not
> > > sure how kretprobe is treated here), so we could say that this
> > > new kprobe prog type can only be attached at the beginning of each
> > > function and enforce that (probably a totally reasonable
> > > restriction, as that's what happens most frequently in practice).
> > > Worst case, it should be possible to split all requested attach targets
> > > into two groups: the fast path at function entry, and all the rest.
> > >
> > > Am I too far off on this one? There might be some more complications
> > > that I don't see.
> >
> > I'd need to check more on kprobes internals, but.. ;-)
> >
> > the new ftrace interface is special for 'direct' trampolines and
> > I think that although kprobes can use ftrace for attaching, they
> > use it in a different way
> >
> > also this current 'multi attach' approach is on top of the current kprobe
> > interface; if we wanted to use the new ftrace API we'd need to add a new
> > kprobe interface and change kprobe attaching to use it (for cases where
> > it's attached at function entry)
> >
> > jirka
> >
> > >
> > > >
> > > > I'll make full benchmarks once we have some final solution
> > > >
> > > > jirka
> > > >
> > >
>