Re: [PATCH v3] kretprobe: percpu support

From: Luigi Rizzo
Date: Fri Feb 21 2020 - 16:23:42 EST


On Tue, Feb 18, 2020 at 3:50 AM Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
>
> On Tue, 18 Feb 2020 01:39:40 -0800
> Luigi Rizzo <lrizzo@xxxxxxxxxx> wrote:
>
> > On Mon, Feb 17, 2020 at 11:55 PM Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
> > >
> > > Hi Luigi,
> > >
> > > On Mon, 17 Feb 2020 16:56:59 -0800
> > > Luigi Rizzo <lrizzo@xxxxxxxxxx> wrote:
> > >
> > > > kretprobe uses a list protected by a single lock to allocate a
> > > > kretprobe_instance in pre_handler_kretprobe(). This works poorly with
> > > > concurrent calls.
> > >
> > > Yes, there are several potential performance issues, and instance
> > > recycling is one of them. However, I think this spinlock is not so
> > > racy, but noisy (especially on many-core machines), right?
> >
> > correct, it is especially painful on 2+ sockets and many-core systems
> > when attaching kretprobes on otherwise uncontended paths.
> >
> > >
> > > The racy lock is kretprobe_hash_lock(); I would like to replace it
> > > with ftrace's per-task shadow stack. But that will be available
> > > only if CONFIG_FUNCTION_GRAPH_TRACER=y (and the instance has no
> > > payload of its own).
> > >
> > > > This patch offers a simplified fix: the percpu_instance flag
> > > > indicates that we allocate one instance per CPU, and the
> > > > allocation is contention-free, but we allow only one pending
> > > > entry per CPU (this could be extended to a small constant number
> > > > without much trouble).
> > >
> > > OK, the percpu instance idea looks good to me, and I think it
> > > should be the default option. Unless the user specifies the number
> > > of instances, it should choose the percpu instance by default.
> >
> > That was my initial implementation, which would not even need the
> > percpu_instance flag in struct kretprobe. However, I felt that
> > changing the default would have subtle side effects (e.g., only one
> > outstanding call per CPU), so I thought it would be better to leave
> > the default unchanged and make the flag explicit.
> >
> > > Moreover, this makes things a bit complicated; can you add a
> > > per-cpu instance array? If it is there, we can remove the old
> > > recycle_rp_inst() code.
> >
> > Can you clarify what you mean by "per-cpu instance array"?
> > Do you mean allowing multiple outstanding entries per CPU?
>
> Yes, either allocating it in the percpu area or allocating arrays
> via a percpu pointer is OK, e.g.:
>
> 	instance_size = sizeof(*rp->pcpu) + rp->data_size;
> 	rp->pcpu = __alloc_percpu(instance_size * array_size,
> 				  __alignof__(*rp->pcpu));
>
> And we will search for a free ri in the percpu array by checking ri->rp == NULL.
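
If I read the suggestion right, the free-slot search would be roughly
as below. This is only a sketch: "pcpu" and "array_size" are names
assumed from the snippet above, not necessarily the real fields, and I
am using cmpxchg so a nested probe firing in interrupt context on the
same CPU cannot claim a slot twice.

#include <linux/atomic.h>
#include <linux/kprobes.h>
#include <linux/percpu.h>

static struct kretprobe_instance *
get_percpu_ri(struct kretprobe *rp, int array_size)
{
	/* each slot is the instance header plus the payload */
	size_t instance_size = sizeof(struct kretprobe_instance) +
			       rp->data_size;
	void *slot = this_cpu_ptr(rp->pcpu);	/* assumed field */
	int i;

	for (i = 0; i < array_size; i++, slot += instance_size) {
		struct kretprobe_instance *ri = slot;

		/* ri->rp == NULL marks a free slot; cmpxchg makes
		 * the claim atomic against irq-context reentry */
		if (cmpxchg(&ri->rp, NULL, rp) == NULL)
			return ri;
	}
	return NULL;	/* all per-CPU slots busy: miss this call */
}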

I have posted a v4 patch with the refactoring you suggested, but
still defaulting to non-percpu allocation, and only one entry per CPU.
The former is to avoid potential regressions; the latter because I
worry that the search in the array may incur several cache misses,
especially if the traced function is allowed to block or the caller
can migrate. (Maybe I am overcautious, but I want to measure that
cost first; once that is clear, perhaps we can move forward with
another patch that defaults to percpu and removes the reclaim code.)
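
For reference, the one-outstanding-entry-per-CPU fast path is
conceptually like the sketch below (hypothetical pcpu_ri field, not
the literal v4 code); the single cmpxchg is what keeps the claim
contention-free, and a busy slot simply counts as a miss:

#include <linux/atomic.h>
#include <linux/kprobes.h>
#include <linux/percpu.h>
#include <linux/ptrace.h>

/* rp->pcpu_ri is a made-up field: one kretprobe_instance per CPU */
static int pre_handler_percpu(struct kretprobe *rp, struct pt_regs *regs)
{
	struct kretprobe_instance *ri = this_cpu_ptr(rp->pcpu_ri);

	/* claim this CPU's only slot without taking any lock */
	if (cmpxchg(&ri->rp, NULL, rp) != NULL) {
		rp->nmissed++;	/* slot busy, e.g. recursion */
		return 0;
	}
	/* ... arm the return trampoline and run the entry handler,
	 * as the list-based path does ... */
	return 0;
}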

cheers
luigi