On Mon, Oct 3, 2022 at 7:16 PM Ian Rogers <irogers@xxxxxxxxxx> wrote:
For consistency with:
https://github.com/intel/perfmon-metrics
the topdown TMA metrics are renamed from Frontend_Bound to
tma_frontend_bound.
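As a hedged illustration (the exact metrics available depend on the CPU
model, and the perf invocation is shown only as a usage sketch), the new
names are simply the old ones lowercased with a tma_ prefix:

```shell
# Naming convention illustration: Frontend_Bound -> tma_frontend_bound.
old=Frontend_Bound
new="tma_$(echo "$old" | tr '[:upper:]' '[:lower:]')"
echo "$new"

# On a CPU with TMA metric support, the renamed metric can be measured
# with perf stat -M (guarded so this is a no-op where perf or the
# metric is unavailable):
command -v perf >/dev/null && perf stat -M tma_frontend_bound -- sleep 1 || true
```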
The _SMT suffix metrics are dropped as the #SMT_On and #EBS_Mode
are correctly expanded in the single main metric. Fix perf expr to
allow a double if to be correctly processed.
Add all 6 levels of TMA metrics. Child metrics are placed in a group
named after their parent, allowing the children of a metric to be
easily measured using the parent metric's name with a _group suffix.
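For example (a sketch only; metric names vary by CPU and the perf
commands assume a machine with TMA support), drilling down from a
level-1 metric into its children via the parent-named group could look
like:

```shell
# Hypothetical drill-down: measure a level-1 metric, then its children
# via the parent's name plus the "_group" suffix. Guarded so this is a
# no-op on systems without perf or without these metrics.
if command -v perf >/dev/null; then
  perf stat -M tma_frontend_bound       -- sleep 1 || true
  perf stat -M tma_frontend_bound_group -- sleep 1 || true
fi

# The group name is derived mechanically from the parent metric name:
parent=tma_frontend_bound
group="${parent}_group"
echo "$group"
```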
Don't drop TMA metrics if they contain topdown events.
The ## and ##? operators are correctly expanded.
The locate-with column, describing a suitable sampling event, is
added to the long description.
Metrics are written in terms of other metrics to reduce the expression
size and increase readability.
Following this the pmu-events/arch/x86 directories match those created
by the script at:
https://github.com/intel/event-converter-for-linux-perf/blob/master/download_and_gen.py
with updates at:
https://github.com/captain5050/event-converter-for-linux-perf
v3. Fix a parse-metrics test failure caused by making metrics that
    refer to other metrics case sensitive - make the case in the test
    metric match.
v2. Fixes commit message wrt missing mapfile.csv updates as noted by
Zhengjun Xing <zhengjun.xing@xxxxxxxxxxxxxxx>. ScaleUnit is added
for TMA metrics. Metrics with topdown events have a missing
slots event added if necessary. The latest metrics at:
https://github.com/intel/perfmon-metrics are used, however, the
event-converter-for-linux-perf scripts now prefer their own
metrics in case of mismatched units when a metric is written in
terms of another. Additional testing was performed on broadwell,
broadwellde, cascadelakex, haswellx, sapphirerapids and tigerlake
CPUs.
I wrote up a little example of performing a top-down analysis for the
perf wiki here:
https://perf.wiki.kernel.org/index.php/Top-Down_Analysis