[PATCH V3 00/17] Stitch LBR call stack (Perf Tools)
From: kan . liang
Date: Fri Mar 13 2020 - 14:34:30 EST
From: Kan Liang <kan.liang@xxxxxxxxxxxxxxx>
Changes since V2:
- Check strdup() in Patch 1
- Split several patches into smaller patches
Changes since V1:
- Rebase on top of commit 5100c2b77049 ("perf header: Add check for
unexpected use of reserved membrs in event attr")
- Fix compile error with GCC 9 in patch 1.
The kernel patches have been merged into linux-next.
commit bbfd5e4fab63 ("perf/core: Add new branch sample type for HW
index of raw branch records")
commit db278b90c326 ("perf/x86/intel: Output LBR TOS information
correctly")
Starting from Haswell, Linux perf can utilize the existing Last Branch
Record (LBR) facility to record call stacks. However, the depth of the
reconstructed LBR call stack is limited to the number of LBR registers.
E.g. on Skylake, the depth of the reconstructed LBR call stack is <= 32.
That's because the hardware overwrites the oldest LBR registers once the
buffer is full.
However, the overwritten LBRs may still be retrievable from the previous
sample, which was taken before the hardware overwrote those LBR registers.
Perf tools can stitch those overwritten LBRs onto the current call stack
to get a more complete call stack.
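To illustrate the idea only (this is not the implementation in this series;
the structure and names below are made up for illustration), stitching boils
down to finding the current sample's oldest LBR entry in the previously
saved sample and appending the even older entries from that saved sample to
the reconstructed call stack:

/*
 * Simplified sketch of the stitching idea, not the code from this series;
 * the structure and names below are hypothetical. "prev" holds the LBR
 * entries saved from the previous sample (newest first), "curr" holds the
 * current sample. Entries evicted from "curr" by the hardware may still
 * exist in "prev": locate the current sample's oldest entry in "prev" and
 * append the even older entries from "prev" to the call stack.
 */
#include <stdio.h>

struct lbr_entry {
        unsigned long from;
        unsigned long to;
};

static int stitch_lbr(const struct lbr_entry *prev, int prev_nr,
                      const struct lbr_entry *curr, int curr_nr,
                      unsigned long *stack, int depth, int max_depth)
{
        const struct lbr_entry *oldest = &curr[curr_nr - 1];
        int i;

        /* Find the current sample's oldest branch in the previous sample. */
        for (i = 0; i < prev_nr; i++)
                if (prev[i].from == oldest->from && prev[i].to == oldest->to)
                        break;
        if (i == prev_nr)
                return depth;           /* no overlap, nothing to stitch */

        /* Everything older than the match was overwritten: stitch it on. */
        for (i++; i < prev_nr && depth < max_depth; i++)
                stack[depth++] = prev[i].from;

        return depth;
}

int main(void)
{
        struct lbr_entry prev[] = {     /* previous sample, newest first */
                { 0x4030, 0x4040 }, { 0x4020, 0x4030 }, { 0x4010, 0x4020 },
        };
        struct lbr_entry curr[] = {     /* current sample, oldest entries lost */
                { 0x4040, 0x4050 }, { 0x4030, 0x4040 },
        };
        unsigned long stack[8] = { 0x4050, 0x4040, 0x4030 };
        int depth = stitch_lbr(prev, 3, curr, 2, stack, 3, 8);

        printf("stitched depth: %d\n", depth);  /* 3 -> 5 */
        return 0;
}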
To determine whether LBRs can be stitched, the maximum number of LBRs is
required. Patches 1-4 retrieve the capability information from sysfs
and save it in the perf header.
Patches 5-12 implement the LBR stitching approach.
Users can use the options introduced in patches 13-16 to enable the LBR
stitching approach for perf report, script, top and c2c.
Patch 17 adds a fast path for the duplicate entry check. It benefits all
call stack parsing, not just the stitched LBR call stack. It can be
merged independently.
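As background for patches 1-4: the capability information is exposed as
small text files under the PMU's caps directory in sysfs. The sketch below
shows one way such name/value pairs could be scanned and read; the exact
path and the helper name are assumptions for illustration, not the code
added by this series.

/*
 * Minimal sketch: read every capability file under a PMU's sysfs caps
 * directory (e.g. /sys/bus/event_source/devices/cpu/caps/). The path and
 * the function name are assumptions for illustration only.
 */
#include <dirent.h>
#include <limits.h>
#include <stdio.h>

static void print_pmu_caps(const char *pmu)
{
        char path[PATH_MAX], buf[128];
        struct dirent *ent;
        struct DIR *unused;
        DIR *dir;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/bus/event_source/devices/%s/caps", pmu);
        dir = opendir(path);
        if (!dir)
                return;

        while ((ent = readdir(dir)) != NULL) {
                if (ent->d_name[0] == '.')
                        continue;
                snprintf(path, sizeof(path),
                         "/sys/bus/event_source/devices/%s/caps/%s",
                         pmu, ent->d_name);
                f = fopen(path, "r");
                if (!f)
                        continue;
                if (fgets(buf, sizeof(buf), f))
                        printf("%s: %s", ent->d_name, buf); /* value has '\n' */
                fclose(f);
        }
        closedir(dir);
}

int main(void)
{
        print_pmu_caps("cpu");
        return 0;
}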
The stitching approach is based on the LBR call stack technology. The known
limitations of LBR call stacks still apply to this approach, e.g. exception
handling such as setjmp/longjmp will result in calls/returns that do not
match.
This approach is not foolproof. There can be cases where it creates
incorrect call stacks from incorrect matches. There is no attempt
to validate any matches in another way, so it is not enabled by default.
However, in many common cases with call stack overflows it can recreate
better call stacks than the default LBR call stack output. So if there
are problems with LBR overflows, this is a possible workaround.
Regression:
Users may collect LBR call stacks on a machine with a new perf tool and a
new kernel (which supports LBR TOS). However, they may parse the perf.data
with an old perf tool (which does not support LBR TOS). The old tool doesn't
check attr.branch_sample_type, so users may get incorrect information
without any warning.
Performance impact:
The processing time may increase with the LBR stitching approach
enabled. The impact depends on the increased depth of the call stacks.
For a simple test case, tchain_edit, with a call stack 43 levels deep:
perf record --call-graph lbr -- ./tchain_edit
perf report --stitch-lbr
Without --stitch-lbr, perf report only displays 32 levels of the call stack.
With --stitch-lbr, perf report can display all 43 levels of the call stack.
The depth of the call stack increases by 34.3%.
Correspondingly, the processing time of perf report increases by 39%:
Without --stitch-lbr: 11.0 sec
With --stitch-lbr: 15.3 sec
The source code of tchain_edit.c is similar to the code below.
/* "noinline" here stands for __attribute__((noinline)). */
#define noinline __attribute__((noinline))

noinline void f43(void)
{
        int i;

        for (i = 0; i < 10000;) {
                if (i % 2)
                        i++;
                else
                        i++;
        }
}

noinline void f42(void)
{
        int i;

        for (i = 0; i < 100; i++) {
                f43();
                f43();
                f43();
        }
}

noinline void f41(void)
{
        int i;

        for (i = 0; i < 100; i++) {
                f42();
                f42();
                f42();
        }
}

noinline void f40(void)
{
        f41();
}

... ...

noinline void f32(void)
{
        f33();
}

noinline void f31(void)
{
        int i;

        for (i = 0; i < 10000; i++) {
                if (i % 2)
                        i++;
                else
                        i++;
        }
        f32();
}

noinline void f30(void)
{
        f31();
}

... ...

noinline void f1(void)
{
        f2();
}

int main()
{
        f1();
}
Kan Liang (17):
perf pmu: Add support for PMU capabilities
perf header: Support CPU PMU capabilities
perf record: Clear HEADER_CPU_PMU_CAPS for non LBR call stack mode
perf stat: Clear HEADER_CPU_PMU_CAPS
perf machine: Remove the indent in resolve_lbr_callchain_sample
perf machine: Refine the function for LBR call stack reconstruction
perf machine: Factor out lbr_callchain_add_kernel_ip()
perf machine: Factor out lbr_callchain_add_lbr_ip()
perf thread: Add a knob for LBR stitch approach
perf tools: Save previous sample for LBR stitching approach
perf tools: Save previous cursor nodes for LBR stitching approach
perf tools: Stitch LBR call stack
perf report: Add option to enable the LBR stitching approach
perf script: Add option to enable the LBR stitching approach
perf top: Add option to enable the LBR stitching approach
perf c2c: Add option to enable the LBR stitching approach
perf hist: Add fast path for duplicate entries check approach
tools/perf/Documentation/perf-c2c.txt | 11 +
tools/perf/Documentation/perf-report.txt | 11 +
tools/perf/Documentation/perf-script.txt | 11 +
tools/perf/Documentation/perf-top.txt | 9 +
.../Documentation/perf.data-file-format.txt | 16 +
tools/perf/builtin-c2c.c | 6 +
tools/perf/builtin-record.c | 3 +
tools/perf/builtin-report.c | 6 +
tools/perf/builtin-script.c | 6 +
tools/perf/builtin-stat.c | 1 +
tools/perf/builtin-top.c | 11 +
tools/perf/util/branch.h | 19 +-
tools/perf/util/callchain.h | 8 +
tools/perf/util/env.h | 3 +
tools/perf/util/header.c | 110 +++++
tools/perf/util/header.h | 1 +
tools/perf/util/hist.c | 23 +
tools/perf/util/machine.c | 423 +++++++++++++++---
tools/perf/util/pmu.c | 98 ++++
tools/perf/util/pmu.h | 12 +
tools/perf/util/sort.c | 2 +-
tools/perf/util/sort.h | 2 +
tools/perf/util/thread.c | 2 +
tools/perf/util/thread.h | 35 ++
tools/perf/util/top.h | 1 +
25 files changed, 760 insertions(+), 70 deletions(-)
--
2.17.1