Re: [PATCH bpf-next v8] selftests/bpf: trace_helpers.c: optimize kallsyms cache
From: Rong Tao
Date: Sun Aug 27 2023 - 21:02:03 EST
Hi, jirka. Thanks for your reply.
> > @@ -164,13 +165,14 @@ int main(int argc, char **argv)
> > }
> >
> > /* initialize kernel symbol translation */
> > - if (load_kallsyms()) {
> > + ksyms = load_kallsyms();
>
> if we keep the load_kallsyms/ksym_search/ksym_get_addr functions as described
> in [1] the samples/bpf would stay untouched apart from the Makefile change
Maybe we should make this change anyway — wouldn't that be better? After all,
keeping samples/bpf untouched is not, by itself, a reason to avoid improving
load_kallsyms(). What do you think?
In addition, if we keep the original ksym_search() interface, the following
problems are hard to solve:
Source code of ksym_search() [1]:

struct ksym *ksym_search(long key)
{
        int start = 0, end = sym_cnt;
        int result;

        /* kallsyms not loaded. return NULL */
        if (sym_cnt <= 0)
                return NULL;

        while (start < end) {
                size_t mid = start + (end - start) / 2;

                result = key - syms[mid].addr;
                if (result < 0)
                        end = mid;
                else if (result > 0)
                        start = mid + 1;
                else
                        return &syms[mid];            <<<
        }

        if (start >= 1 && syms[start - 1].addr < key &&
            key < syms[start].addr)
                /* valid ksym */
                return &syms[start - 1];              <<<

        /* out of range. return _stext */
        return &syms[0];                              <<<
}
The original ksym_search() interface returns a pointer directly into the
global syms array, which is also unsafe with multiple threads. Allocating
new memory for each result would not be a clean solution either.
And if we rewrote

struct ksym *ksym_search(long key)

to

struct ksym ksym_search(long key)

that would also affect the source code in samples/bpf.
The same problem exists with ksym_get_addr().
Best wishes,
Rong Tao
[1] https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/tools/testing/selftests/bpf/trace_helpers.c#n100