Re: [PATCH v2 bpf-next 1/4] perf: export get/put_chain_entry()

From: Song Liu
Date: Fri Jun 26 2020 - 17:39:08 EST

> On Jun 26, 2020, at 1:06 PM, Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
>
> On Fri, Jun 26, 2020 at 5:10 AM Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>
>> On Thu, Jun 25, 2020 at 05:13:29PM -0700, Song Liu wrote:
>>> This would be used by the bpf stack map.
>>
>> Would it make sense to sanitize the API a little before exposing it?
>>
>> diff --git a/kernel/events/callchain.c b/kernel/events/callchain.c
>> index 334d48b16c36..016894b0d2c2 100644
>> --- a/kernel/events/callchain.c
>> +++ b/kernel/events/callchain.c
>> @@ -159,8 +159,10 @@ static struct perf_callchain_entry *get_callchain_entry(int *rctx)
>> return NULL;
>>
>> entries = rcu_dereference(callchain_cpus_entries);
>> - if (!entries)
>> + if (!entries) {
>> + put_recursion_context(this_cpu_ptr(callchain_recursion), rctx);
>> return NULL;
>> + }
>>
>> cpu = smp_processor_id();
>>
>> @@ -183,12 +185,9 @@ get_perf_callchain(struct pt_regs *regs, u32 init_nr, bool kernel, bool user,
>> int rctx;
>>
>> entry = get_callchain_entry(&rctx);
>> - if (rctx == -1)
>> + if (!entry || rctx == -1)
>> return NULL;
>>
>
> Isn't the rctx == -1 check unnecessary now? It seems
> get_callchain_entry() will always return NULL when rctx == -1.

Yes, it looks like we only need to check entry.

Thanks,
Song