Re: [RFC PATCH bpf-next v2 2/2] bpf: Pass external callchain entry to get_perf_callchain
From: Alexei Starovoitov
Date: Tue Oct 14 2025 - 11:02:20 EST
On Tue, Oct 14, 2025 at 5:14 AM Jiri Olsa <olsajiri@xxxxxxxxx> wrote:
>
> On Tue, Oct 14, 2025 at 06:01:28PM +0800, Tao Chen wrote:
> > As Alexei noted, the entry returned by get_perf_callchain() may be
> > reused if the task is preempted after the BPF program enters the
> > migrate-disable section. Drawing on the per-CPU design of
> > bpf_perf_callchain_entries, a stack-allocated bpf_perf_callchain_entry
> > is used here instead.
> >
> > Signed-off-by: Tao Chen <chen.dylane@xxxxxxxxx>
> > ---
> > kernel/bpf/stackmap.c | 19 +++++++++++--------
> > 1 file changed, 11 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
> > index 94e46b7f340..acd72c021c0 100644
> > --- a/kernel/bpf/stackmap.c
> > +++ b/kernel/bpf/stackmap.c
> > @@ -31,6 +31,11 @@ struct bpf_stack_map {
> > struct stack_map_bucket *buckets[] __counted_by(n_buckets);
> > };
> >
> > +struct bpf_perf_callchain_entry {
> > + u64 nr;
> > + u64 ip[PERF_MAX_STACK_DEPTH];
> > +};
> > +
> > static inline bool stack_map_use_build_id(struct bpf_map *map)
> > {
> > return (map->map_flags & BPF_F_STACK_BUILD_ID);
> > @@ -305,6 +310,7 @@ BPF_CALL_3(bpf_get_stackid, struct pt_regs *, regs, struct bpf_map *, map,
> > bool user = flags & BPF_F_USER_STACK;
> > struct perf_callchain_entry *trace;
> > bool kernel = !user;
> > + struct bpf_perf_callchain_entry entry = { 0 };
>
> so IIUC, with the entries on the stack we no longer need the
> preempt_disable you had in the previous version, right?
>
> I saw Andrii's justification for having this on the stack, and I think
> it's fine, but does it have to be initialized? It seems only the used
> entries are copied to the map.
No. We're not adding 1k of stack consumption.
pw-bot: cr