Re: [PATCH RFC 1/6] perf/x86: Add perf text poke event

From: Leo Yan
Date: Thu Oct 31 2019 - 03:31:58 EST


[ + Will, Mark ]

On Wed, Oct 30, 2019 at 05:23:25PM +0100, Peter Zijlstra wrote:
> On Wed, Oct 30, 2019 at 10:19:50PM +0800, Leo Yan wrote:
> > On Wed, Oct 30, 2019 at 01:46:59PM +0100, Peter Zijlstra wrote:
> > > On Wed, Oct 30, 2019 at 06:47:47PM +0800, Leo Yan wrote:
>
> > > Anyway, the below argument doesn't care much, it works for NOP/JMP just
> > > fine.
> >
> > We can support NOP/JMP case as the first step, but later should can
> > extend to support other transitions.
>
> Since all instructions (with the possible exception of RET) are
> unconditional branch instructions: NOP, JMP, CALL. It makes no real
> difference to the argument below.
>
> ( I'm thinking RET might be special in that it reads the return address
> from the stack and therefore must emit the whole IP into the stream, as
> we cannot know the stack state )

To be honest, I don't know the exact format for 'ret' in a CoreSight
trace, so I would like to leave this to Mike.

Since Mike has confirmed that Arm CoreSight also needs an accurate
code image for decoding branch/call instructions, there is no doubt
that we need the same mechanism here to keep the DSO in sync with the
live code flow. So the question is how to do this on Arm/Arm64 :)

Before moving forward, I'd like to step back, describe clearly what
the current problem on Arm64 is, and check one question about jump
labels:

I checked the kernel code: both kprobes and ftrace use stop_machine()
to alter instructions. Since all CPUs rendezvous in stop_machine()'s
synchronization, there is no race between the instruction transition
and other CPUs executing the altered instruction; thus it is safe for
kprobes and ftrace to use the perf event PERF_TEXT_POKE_UPDATE to
notify about the instruction transition, and this allows us to read
out the 'correct' instruction for the decoder.
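
For reference, the stop_machine()-based patching in
arch/arm64/kernel/insn.c works roughly like the sketch below (a
simplified, non-verbatim rendering of the aarch64_insn_patch_text()
callback): the first CPU into the callback does the writes while all
other CPUs spin, so no CPU can execute the instruction mid-update.

```c
/* Simplified sketch of arm64's stop_machine() patching callback;
 * field/variable names approximate the real kernel code. */
struct insn_patch {
	void **addrs;
	u32 *insns;
	int insn_cnt;
	atomic_t cpu_count;
};

static int insn_patch_text_cb(void *arg)
{
	struct insn_patch *pp = arg;
	int i, ret = 0;

	if (atomic_inc_return(&pp->cpu_count) == 1) {
		/* First CPU in becomes the master and does the writes. */
		for (i = 0; ret == 0 && i < pp->insn_cnt; i++)
			ret = aarch64_insn_patch_text_nosync(pp->addrs[i],
							     pp->insns[i]);
		/* Extra increment releases the spinning CPUs. */
		atomic_inc(&pp->cpu_count);
	} else {
		/* Everyone else waits until patching is complete. */
		while (atomic_read(&pp->cpu_count) <= num_online_cpus())
			cpu_relax();
	}
	return ret;
}
```

Because every online CPU is parked in this callback, a single
PERF_TEXT_POKE_UPDATE event emitted around the patch is race-free.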

But jump labels do not use stop_machine(), so the perf event
PERF_TEXT_POKE_UPDATE introduces a race condition as below (take the
transition from NOP to branch as an example):

  CPU0                                            CPU1
  static_key_enable()                             NOP instruction
    `-> aarch64_insn_patch_text_nosync()
          `-> perf event PERF_TEXT_POKE_UPDATE
                                                  -> Execute NOP
                                                     instruction
          `-> aarch64_insn_write()
          `-> __flush_icache_range()

Since the x86 platform has INT3 as an intermediate state, it can avoid
the race condition between CPU0 (which performs the transition) and
the other CPUs (which may concurrently execute the NOP/branch).
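
Schematically, the x86 flow (as I understand the text_poke_bp()
sequence; shown here as a sketch, not verbatim kernel code) gives a
well-defined state at every point in time -- a racing CPU sees either
the old instruction, INT3 (trapped and handled), or the new
instruction, never a half-written one:

```c
/* Sketch of the x86 int3-based patching sequence, for contrast. */
static void poke_int3_sketch(void *addr, const void *insn, size_t len)
{
	unsigned char int3 = 0xcc;

	text_poke(addr, &int3, 1);              /* 1. INT3 over byte 0   */
	on_each_cpu(do_sync_core, NULL, 1);     /* 2. serialise all CPUs */
	text_poke(addr + 1, insn + 1, len - 1); /* 3. write the tail     */
	on_each_cpu(do_sync_core, NULL, 1);
	text_poke(addr, insn, 1);               /* 4. final first byte   */
	on_each_cpu(do_sync_core, NULL, 1);
}
```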

> > > > we need to update dso cache for the
> > > > 'PERF_TEXT_POKE_UPDATE_PREV' event; if detect the instruction is
> > > > changed from branch to nop, we need to update dso cache for
> > > > 'PERF_TEXT_POKE_UPDATE_POST' event. The main idea is to ensure the
> > > > branch instructions can be safely contained in the dso file and any
> > > > branch samples can read out correct branch instruction.
> > > >
> > > > Could you confirm this is the same with your understanding? Or I miss
> > > > anything? I personally even think the pair events can be used for
> > > > different arches (e.g. the solution can be reused on Arm64/x86, etc).
> > >
> > > So the problem we have with PT is that it is a bit-stream of
> > > branch taken/not-taken decisions. In order to decode that we need to
> > > have an accurate view of the unconditional code flow.
> > >
> > > Both NOP/JMP are unconditional and we need to exactly know which of the
> > > two was encountered.
> >
> > If I understand correctly, PT decoder needs to read out instructions
> > from dso and decide the instruction type (NOP or JMP), and finally
> > generate the accurate code flow.
> >
> > So PT decoder relies on (cached) DSO for decoding. As I know, this
> > might be different from Arm CS, since Arm CS decoder is merely
> > generate packets and it doesn't need to rely on DSO for decoding.
>
> Given a start point (from a start or sync packet) we scan the
> instruction stream forward until the first conditional branch
> instruction. Then we consume the next available branch decision bit to
> know where to continue.
>
> So yes, we need to have a correct text image available for this to work.
>
> > > With your scheme, I don't see how we can ever actually know that. When
> > > we get the PRE event, all we really know is that we're going to change
> > > a specific instruction into another. And at the POST event we know it
> > > has been done. But in between these two events, we have no clue which of
> > > the two instructions is live on which CPU (two CPUs might in fact have a
> > > different live instruction at the same time).
> > >
> > > This means we _cannot_ unambiguously decode a taken/not-taken decision
> > > stream.
> > >
> > > Does CS have this same problem, and how would the PRE/POST events help
> > > with that?
> >
> > My purpose is to use PRE event and POST event to update cached DSO,
> > thus perf tool can read out 'correct' instructions and fill them into
> > instruction/branch samples.
>
> The thing is, as I argued, the instruction state between PRE and POST is
> ambiguous. This makes it impossible to decode the branch decision
> stream.
>
> Suppose CPU0 emits the PRE event at T1 and the POST event at T5, but we
> have CPU1 covering the instruction at T3.
>
> How do you decide where CPU1 goes and what the next conditional branch
> is?

Sorry, my earlier proposal was not well thought out.

I agree that T3 is an uncertain state, as in the flow below:

  CPU0                                            CPU1
  perf event PERF_TEXT_POKE_UPDATE_PRE   -> T1

                                                  Int3 / NOP     -> T3

                                                  Int3 / branch  -> T3'

  perf event PERF_TEXT_POKE_UPDATE_POST  -> T5

Unless the trace carries extra information and the decoder can use the
old/new instruction combination for analysis, the PRE/POST event pair
does not help resolve this issue (if the trace decoder can do this,
then the kernel change would be much simpler).

Below are two potential options for the Arm64 platform:

- Change jump label to use stop_machine(); this might introduce a
  performance issue if jump labels are altered frequently.

  To mitigate the impact, we could use stop_machine() only when we
  detect that the relevant perf events are enabled, and otherwise fall
  back to the old code path.

- We could use a breakpoint to emulate a flow similar to x86's INT3,
  thus eliminating the race between the CPU that alters the
  instruction and the CPUs that may execute the altered instruction.
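
The first option might look roughly like the sketch below. This is a
hypothetical illustration only: 'text_poke_events' is an assumed
counter maintained by the perf core, and the perf_event_text_poke()
call stands in for whatever notification hook this series settles on;
the insn API names follow the existing arm64 code.

```c
/* Hypothetical: choose the patching path based on whether any perf
 * event has asked for text-poke notifications. */
void arch_jump_label_transform(struct jump_entry *entry,
			       enum jump_label_type type)
{
	void *addr = (void *)jump_entry_code(entry);
	u32 insn;

	if (type == JUMP_LABEL_JMP)
		insn = aarch64_insn_gen_branch_imm(jump_entry_code(entry),
						   jump_entry_target(entry),
						   AARCH64_INSN_BRANCH_NOLINK);
	else
		insn = aarch64_insn_gen_nop();

	if (atomic_read(&text_poke_events)) {
		/* Slow path: stop_machine()-based, race-free w.r.t.
		 * the poke event, only while tracing is active. */
		aarch64_insn_patch_text(&addr, &insn, 1);
	} else {
		/* Fast path: the existing nosync write. */
		aarch64_insn_patch_text_nosync(addr, insn);
	}
}
```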
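
And the second option might be sketched as below. This is purely
hypothetical: 'TEXT_POKE_BRK_INSN' (a BRK encoding with a reserved
immediate that a debug-exception handler would recognise and
emulate/skip) does not exist today; only
aarch64_insn_patch_text_nosync() and kick_all_cpus_sync() are existing
kernel APIs. Since A64 instructions are fixed 32-bit, a single
intermediate step should suffice:

```c
/* Hypothetical BRK-based flow mirroring x86's INT3 scheme. */
static void patch_text_via_brk(void *addr, u32 new_insn)
{
	/* 1. Install the text-poke BRK; CPUs hitting it trap and the
	 *    handler deals with the old instruction. */
	aarch64_insn_patch_text_nosync(addr, TEXT_POKE_BRK_INSN);

	/* 2. Ensure every CPU observes the BRK (IPI + context sync). */
	kick_all_cpus_sync();

	/* 3. Write the final instruction. The live instruction is now
	 *    unambiguous at any instant: old insn before step 1, BRK
	 *    (trapped) in between, new insn after step 3 -- which is
	 *    exactly what the trace decoder needs. */
	aarch64_insn_patch_text_nosync(addr, new_insn);
	kick_all_cpus_sync();
}
```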

@Will, @Mark, could you help review this? I'd appreciate any comments
and suggestions. And please let me know if you want to consolidate
this with related work on your side (or if you know of any ongoing
discussion or someone already working on this).

Thanks,
Leo Yan