Re: Getting empty callchain from perf_callchain_kernel()
From: Kairui Song
Date: Thu May 23 2019 - 10:53:29 EST
On Thu, May 23, 2019 at 9:32 PM Josh Poimboeuf <jpoimboe@xxxxxxxxxx> wrote:
>
> On Thu, May 23, 2019 at 02:48:11PM +0800, Kairui Song wrote:
> > On Thu, May 23, 2019 at 7:46 AM Josh Poimboeuf <jpoimboe@xxxxxxxxxx> wrote:
> > >
> > > On Wed, May 22, 2019 at 12:45:17PM -0500, Josh Poimboeuf wrote:
> > > > On Wed, May 22, 2019 at 02:49:07PM +0000, Alexei Starovoitov wrote:
> > > > > The one that is broken is prog_tests/stacktrace_map.c
> > > > > There we attach bpf to a standard tracepoint, where the
> > > > > kernel is supposed to collect pt_regs before calling into bpf.
> > > > > And that's what bpf_get_stackid_tp() is doing.
> > > > > It passes the pt_regs (collected before any bpf ran)
> > > > > into bpf_get_stackid(), which calls get_perf_callchain().
> > > > > Same thing with kprobes and uprobes.
> > > >
> > > > Is it trying to unwind through ___bpf_prog_run()?
> > > >
> > > > If so, that would at least explain why ORC isn't working. Objtool
> > > > currently ignores that function because it can't follow the jump table.
> > >
> > > Here's a tentative fix (for ORC, at least). I'll need to make sure this
> > > doesn't break anything else.
> > >
> > > diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> > > index 242a643af82f..1d9a7cc4b836 100644
> > > --- a/kernel/bpf/core.c
> > > +++ b/kernel/bpf/core.c
> > > @@ -1562,7 +1562,6 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
> > >  		BUG_ON(1);
> > >  		return 0;
> > >  }
> > > -STACK_FRAME_NON_STANDARD(___bpf_prog_run); /* jump table */
> > >
> > >  #define PROG_NAME(stack_size) __bpf_prog_run##stack_size
> > >  #define DEFINE_BPF_PROG_RUN(stack_size) \
> > > diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> > > index 172f99195726..2567027fce95 100644
> > > --- a/tools/objtool/check.c
> > > +++ b/tools/objtool/check.c
> > > @@ -1033,13 +1033,6 @@ static struct rela *find_switch_table(struct objtool_file *file,
> > >  		if (text_rela->type == R_X86_64_PC32)
> > >  			table_offset += 4;
> > >
> > > -		/*
> > > -		 * Make sure the .rodata address isn't associated with a
> > > -		 * symbol. gcc jump tables are anonymous data.
> > > -		 */
> > > -		if (find_symbol_containing(rodata_sec, table_offset))
> > > -			continue;
> > > -
> > >  		rodata_rela = find_rela_by_dest(rodata_sec, table_offset);
> > >  		if (rodata_rela) {
> > >  			/*
> >
> > Hi Josh, this still won't fix the problem.
> >
> > The problem is not (or not only) with ___bpf_prog_run; what actually
> > goes wrong is the JITed bpf code.
>
> There seem to be a bunch of issues. My patch at least fixes the failing
> selftest reported by Alexei for ORC.
>
> How can I recreate your issue?
Hmm, I used bcc's example to attach bpf to a tracepoint, and with that
fix the stack trace is still invalid.
The command I used with bcc:
python3 ./tools/stackcount.py t:sched:sched_fork
And I just had another try applying your patch; the selftest is also
still failing. I'm applying it on my local master branch, which is a
few days older than upstream; I can update and try again. Am I missing
anything?
>
> > For the frame pointer unwinder, it seems the JITed bpf code has a
> > shifted "BP" register (arch/x86/net/bpf_jit_comp.c:217), so if we can
> > unshift it properly then it will work.
>
> Yeah, that looks like a frame pointer bug in emit_prologue().
>
> > I tried the code below, and the problem is fixed (only for the frame
> > pointer unwinder, though). We need to find a better way to detect the
> > bpf part and apply a similar trick, if this is a feasible way to fix it:
> >
> > diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
> > index 9b9fd4826e7a..2c0fa2aaa7e4 100644
> > --- a/arch/x86/kernel/unwind_frame.c
> > +++ b/arch/x86/kernel/unwind_frame.c
> > @@ -330,8 +330,17 @@ bool unwind_next_frame(struct unwind_state *state)
> >  	}
> >
> >  	/* Move to the next frame if it's safe: */
> > -	if (!update_stack_state(state, next_bp))
> > -		goto bad_address;
> > +	if (!update_stack_state(state, next_bp)) {
> > +		// Try again with the shifted BP, see AUX_STACK_SPACE
> > +		state->bp += 5;
> > +		next_bp = (unsigned long *)READ_ONCE_TASK_STACK(state->task, *state->bp);
> > +		// Clear and refetch the stack info, it was marked bad above
> > +		state->stack_mask = 0;
> > +		get_stack_info(next_bp, state->task, &state->stack_info, &state->stack_mask);
> > +		if (!update_stack_state(state, next_bp)) {
> > +			goto bad_address;
> > +		}
> > +	}
> >
> >  	return true;
>
> Nack.
>
> > For the ORC unwinder, I think it can't find any info about the JITed
> > part. Maybe if we could let it just skip the JITed part and go back to
> > the kernel context, that would be good enough.
>
> If it's starting from a fake pt_regs then that's going to be a
> challenge.
>
> Will the JIT code always have the same stack layout? If so then we
> could hard code that knowledge in ORC. Or even better, create a generic
> interface for ORC to query the creator of the generated code about the
> stack layout.
I think yes, the JITed code should always have the same stack layout.
I'm not sure yet why we have the BP shift; if the prologue code could be
tweaked to work with the frame pointer unwinder, that would be good to
have. But that still doesn't help ORC.
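For ORC, the generic query interface you mention could look roughly like
the sketch below. To be clear, this is only to illustrate the idea; none
of these names (generated_code_layout, lookup_generated_code_layout, ...)
exist today:

/*
 * Hypothetical sketch only: the JIT registers a description of the fixed
 * stack layout of the code it emitted, and the ORC unwinder falls back to
 * it when an IP has no ORC entry. Nothing here exists in the kernel today.
 */
struct generated_code_layout {
	unsigned long start, end;	/* address range of the generated code */
	int cfa_offset;			/* CFA offset relative to RBP in these frames */
	int ra_offset;			/* return address offset relative to the CFA */
};

/* The JIT would call this after finalizing an image. */
int register_generated_code_layout(const struct generated_code_layout *layout);

/* The unwinder would call this when it finds no ORC entry for an address. */
const struct generated_code_layout *lookup_generated_code_layout(unsigned long ip);

The unwinder could then compute the previous frame from the fixed offsets
instead of giving up.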
Would it be a good idea to have a region reserved for JITed code?
Currently it shares the region with the "module mapping space". If JITed
code had its own separate region, then whenever the unwinder meets code
in that region it would know it is JITed code and could do something
special about it; a rough sketch of what I mean is below.
This should make it much easier for both the frame pointer and ORC
unwinders to work.
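Something like this, purely as a sketch (is_bpf_jit_address() and the
region bounds are made-up names; nothing like this exists on x86 today):

/*
 * Sketch only: if all JITed images were allocated from a dedicated address
 * range instead of the module mapping space, the unwinders could recognize
 * a JITed frame with a cheap range check. The region bounds here are
 * hypothetical placeholders that would be set aside at boot.
 */
static unsigned long bpf_jit_region_start, bpf_jit_region_end;

static inline bool is_bpf_jit_address(unsigned long addr)
{
	return addr >= bpf_jit_region_start && addr < bpf_jit_region_end;
}

unwind_next_frame() and the ORC unwinder could then check
is_bpf_jit_address(state->ip) and apply the JIT's known frame layout (or
skip straight to the kernel frames that called the program) instead of
failing.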
--
Best Regards,
Kairui Song