Re: [PATCH] perf record: add a shortcut for metrics
From: Liang, Kan
Date: Wed May 29 2024 - 15:41:32 EST
On 2024-05-29 11:15 a.m., Guilherme Amadio wrote:
> Hi Arnaldo,
>
> On Tue, May 28, 2024 at 08:20:05PM +0200, Arnaldo Carvalho de Melo wrote:
>> On Tue, May 28, 2024 at 11:55:00AM -0400, Liang, Kan wrote:
>>> On 2024-05-28 7:57 a.m., Artem Savkov wrote:
>>>> On Mon, May 27, 2024 at 10:01:37PM -0700, Ian Rogers wrote:
>>>>> On Mon, May 27, 2024 at 10:46 AM Arnaldo Carvalho de Melo
>>>>> <acme@xxxxxxxxxx> wrote:
>>>>>>
>>>>>> On Mon, May 27, 2024 at 02:28:32PM -0300, Arnaldo Carvalho de Melo wrote:
>>>>>>> On Mon, May 27, 2024 at 02:04:54PM -0300, Arnaldo Carvalho de Melo wrote:
>>>>>>>> On Mon, May 27, 2024 at 02:02:33PM -0300, Arnaldo Carvalho de Melo wrote:
>>>>>>>>> On Mon, May 27, 2024 at 12:15:19PM +0200, Artem Savkov wrote:
>>>>>>>>>> Add -M/--metrics option to perf-record providing a shortcut to record
>>>>>>>>>> metrics and metricgroups. This option mirrors the one in perf-stat.
>>>>>>>
>>>>>>>>>> Suggested-by: Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>
>>>>>>>>>> Signed-off-by: Artem Savkov <asavkov@xxxxxxxxxx>
>>>>>>
>>>>>>> How did you test this?
>>>>>>
>>>>>>> The idea, from my notes, was to be able to have extra columns in 'perf
>>>>>>> report' with things like IPC and other metrics, probably not all metrics
>>>>>>> will apply. We need to find a way to find out which ones are OK for that
>>>>>>> purpose, for instance:
>>>>>>
>>>>>> One that may make sense:
>>>>>>
>>>>>> root@number:~# perf record -M tma_fb_full
>>>>>> ^C[ perf record: Woken up 1 times to write data ]
>>>>>> [ perf record: Captured and wrote 3.846 MB perf.data (21745 samples) ]
>>>>>>
>>>>>> root@number:~# perf evlist
>>>>>> cpu_core/CPU_CLK_UNHALTED.THREAD/
>>>>>> cpu_core/L1D_PEND_MISS.FB_FULL/
>>>>>> dummy:u
>>>>>> root@number:~#
>>>>>>
>>>>>> But then we need to read both to do the math, maybe something like:
>>>>>>
>>>>>> root@number:~# perf record -e '{cpu_core/CPU_CLK_UNHALTED.THREAD/,cpu_core/L1D_PEND_MISS.FB_FULL/}:S'
>>>>>> ^C[ perf record: Woken up 40 times to write data ]
>>>>>> [ perf record: Captured and wrote 14.640 MB perf.data (219990 samples) ]
>>>>>>
>>>>>> root@number:~# perf script | head
>>>>>> cc1plus 1339704 [000] 36028.995981: 2011389 cpu_core/CPU_CLK_UNHALTED.THREAD/: 1097303 [unknown] (/usr/libexec/gcc/x86_64-pc-linux-gnu/13/cc1plus)
>>>>>> cc1plus 1339704 [000] 36028.995981: 26231 cpu_core/L1D_PEND_MISS.FB_FULL/: 1097303 [unknown] (/usr/libexec/gcc/x86_64-pc-linux-gnu/13/cc1plus)
>>>>>> cc1plus 1340011 [001] 36028.996008: 2004568 cpu_core/CPU_CLK_UNHALTED.THREAD/: 8c23b4 [unknown] (/usr/libexec/gcc/x86_64-pc-linux-gnu/13/cc1plus)
>>>>>> cc1plus 1340011 [001] 36028.996008: 20113 cpu_core/L1D_PEND_MISS.FB_FULL/: 8c23b4 [unknown] (/usr/libexec/gcc/x86_64-pc-linux-gnu/13/cc1plus)
>>>>>> clang 1340462 [002] 36028.996043: 2007356 cpu_core/CPU_CLK_UNHALTED.THREAD/: ffffffffb43b045d release_pages+0x3dd ([kernel.kallsyms])
>>>>>> clang 1340462 [002] 36028.996043: 23481 cpu_core/L1D_PEND_MISS.FB_FULL/: ffffffffb43b045d release_pages+0x3dd ([kernel.kallsyms])
>>>>>> cc1plus 1339622 [003] 36028.996066: 2004148 cpu_core/CPU_CLK_UNHALTED.THREAD/: 760874 [unknown] (/usr/libexec/gcc/x86_64-pc-linux-gnu/13/cc1plus)
>>>>>> cc1plus 1339622 [003] 36028.996066: 31935 cpu_core/L1D_PEND_MISS.FB_FULL/: 760874 [unknown] (/usr/libexec/gcc/x86_64-pc-linux-gnu/13/cc1plus)
>>>>>> as 1340513 [004] 36028.996097: 2005052 cpu_core/CPU_CLK_UNHALTED.THREAD/: ffffffffb4491d65 __count_memcg_events+0x55 ([kernel.kallsyms])
>>>>>> as 1340513 [004] 36028.996097: 45084 cpu_core/L1D_PEND_MISS.FB_FULL/: ffffffffb4491d65 __count_memcg_events+0x55 ([kernel.kallsyms])
>>>>>> root@number:~#
>>>>>>
>>>>>> root@number:~# perf report --stdio -F +period | head -20
>>>>>> # To display the perf.data header info, please use --header/--header-only options.
>>>>>> #
>>>>>> #
>>>>>> # Total Lost Samples: 0
>>>>>> #
>>>>>> # Samples: 219K of events 'anon group { cpu_core/CPU_CLK_UNHALTED.THREAD/, cpu_core/L1D_PEND_MISS.FB_FULL/ }'
>>>>>> # Event count (approx.): 216528524863
>>>>>> #
>>>>>> # Overhead Period Command Shared Object Symbol
>>>>>> # ................ .................... ......... ................. ....................................
>>>>>> #
>>>>>> 4.01% 1.09% 8538169256 39826572 podman [kernel.kallsyms] [k] native_queued_spin_lock_slowpath
>>>>>> 1.35% 1.17% 2863376078 42829266 cc1plus cc1plus [.] 0x00000000003f6bcc
>>>>>> 0.94% 0.78% 1990639149 28408591 cc1plus cc1plus [.] 0x00000000003f6be4
>>>>>> 0.65% 0.17% 1375916283 6109515 podman [kernel.kallsyms] [k] _raw_spin_lock_irqsave
>>>>>> 0.61% 0.99% 1304418325 36198834 cc1plus [kernel.kallsyms] [k] get_mem_cgroup_from_mm
>>>>>> 0.52% 0.42% 1103054030 15427418 cc1plus cc1plus [.] 0x0000000000ca6c69
>>>>>> 0.51% 0.17% 1094200572 6299289 podman [kernel.kallsyms] [k] psi_group_change
>>>>>> 0.42% 0.41% 893633315 14778675 cc1plus cc1plus [.] 0x00000000018afafe
>>>>>> 0.42% 1.29% 887664793 47046952 cc1plus [kernel.kallsyms] [k] asm_exc_page_fault
>>>>>> root@number:~#
>>>>>>
>>>>>> That 'tma_fb_full' metric then would be another column, calculated from
>>>>>> the sampled components of its metric equation:
>>>>>>
>>>>>> root@number:~# perf list tma_fb_full | head
>>>>>>
>>>>>> Metric Groups:
>>>>>>
>>>>>> MemoryBW: [Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet]
>>>>>> tma_fb_full
>>>>>> [This metric does a *rough estimation* of how often L1D Fill Buffer
>>>>>> unavailability limited additional L1D miss memory access requests to
>>>>>> proceed]
>>>>>>
>>>>>> TopdownL4: [Metrics for top-down breakdown at level 4]
>>>>>> root@number:~#
>>>>>>
>>>>>> This is roughly what we brainstormed, to support metrics in other tools
>>>>>> than 'perf stat' but we need to check the possibilities and limitations
>>>>>> of such an idea, hopefully this discussion will help with that,
>>>>>
>>>>> Putting metrics next to code in perf report/annotate sounds good to
>>>>> me, opening all events from a metric as if we want to sample on them
>>>>> less so.
>>>>
>>>> The idea was to record whatever data was asked for at the record step
>>>> and have perf report provide the list of all metrics that can be
>>>> calculated from that data, e.g. you could record tma_info_thread_ipc
>>>> and report would suggest both it and tma_info_thread_cpi.
>>>>
>>>
>>> Do you mean sampling all the events in a metric, and reporting both the
>>> samples and the metric calculation results in the report?
>>> That doesn't work for all metrics.
>>
>> IIRC Guilherme was mentioning having extra metrics on report was
>> something he missed that is available on tools such as VTune, Guilherme?
>
> Thanks for asking. I will try to explain the motivation behind metric
> sampling. VTune offers something called a Microarchitecture Analysis
> report, which shows a breakdown of all the TMA metrics per symbol:
>
> https://www.intel.com/content/www/us/en/docs/vtune-profiler/cookbook/2023-0/top-down-microarchitecture-analysis-method.html
>
> The link above has a small screenshot showing function, instructions,
> CPI, and the metrics. This is much better than just counting, because in
> a large detector simulation, for example, there are many different kinds
> of bottlenecks the code can have, and the breakdown per symbol helps to
> identify which functions suffer from bad speculation, which suffer from
> cache misses, etc. This allows one to choose what kind of change to make
> to the software to optimize it. So as a first step, having TMA level 0
> (i.e. a breakdown of the pipelines for Front-End Bound, Bad Speculation,
> Core Bound, and Memory Bound) would already go quite far towards the
> goal of understanding bottlenecks in specific parts of the code. VTune
> forces sampling without collecting call stacks for this; perf could do
> the same. Hotspots usually have lots of samples, which then allows
> computing metrics relatively accurately.
Yes, that's the assumption the VTune method relies on. Otherwise, the
result may be dubious.
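To make the assumption concrete, here is a rough sketch of the per-symbol aggregation the VTune-style report implies, with entirely made-up sample data and a hypothetical minimum-sample threshold; real data would come from decoded perf.data samples:

```python
# Hypothetical sketch: aggregate leader-sampled group counts per symbol
# and compute a metric (here IPC) only where enough samples landed.
# The sample data and MIN_SAMPLES threshold are illustrative, not real.
from collections import defaultdict

MIN_SAMPLES = 3  # below this, the per-symbol metric is statistically dubious

# (symbol, instructions, cycles) triples, one per leader sample
samples = [
    ("hot_loop", 2_000_000, 1_000_000),
    ("hot_loop", 1_900_000, 1_000_000),
    ("hot_loop", 2_100_000, 1_000_000),
    ("hot_loop", 2_000_000, 1_000_000),
    ("cold_init",   50_000, 1_000_000),  # only one sample: suppress metric
]

def per_symbol_ipc(samples, min_samples=MIN_SAMPLES):
    agg = defaultdict(lambda: [0, 0, 0])  # symbol -> [instructions, cycles, nsamples]
    for sym, instr, cycles in samples:
        agg[sym][0] += instr
        agg[sym][1] += cycles
        agg[sym][2] += 1
    # Report IPC only for symbols with enough samples, analogous to the
    # 0.0% cells VTune shows for lines with few samples.
    return {sym: (i / c if n >= min_samples else None)
            for sym, (i, c, n) in agg.items()}

print(per_symbol_ipc(samples))
```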
> VTune uses multiplexing and a
> very large sampling expression, which I am pasting at the end of this
> message². I extracted that command from the report file after using
> "vtune -collect uarch-exploration <command>" to produce a report. I
> tried that with standard perf record and it failed to parse, so likely
> amplxe-perf is required to be able to record it, but I thought it'd
> be useful information.
Actually, there is already similar support in perf script, provided by Andi.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=4bd1bef8bba2f99ff472ae3617864dda301f81bd
It should be possible to extend it to other tools, e.g., annotate or report.
But it seems the feature is broken now. It's better to fix it first.
$ sudo perf script -I -F cpu,ip,sym,event,period,metric
Segmentation fault
The solution relies on the sample read feature, so you probably have to
divide the events into several groups if the metric is too big.
For the leading event, the ref-cycles suggested in Andi's example should
be a good default choice; after all, you want to measure time.
For example, "perf record -M tma_info_thread_ipc"
may be translated to
perf record -e "{ref-cycles,INST_RETIRED.ANY,CPU_CLK_UNHALTED.THREAD}:S"
The implementation should be simpler than the VTune method.
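As a minimal sketch of what evaluating that metric from the ":S" group could look like: with PERF_SAMPLE_READ, each leader sample carries the current counts of every group member, so the metric expression can be evaluated over accumulated reads. The records below are synthetic stand-ins for decoded perf.data samples; the decoding itself is elided.

```python
# Hypothetical sketch: each leader sample from a ":S" group carries the
# counts of all group members, so a metric such as
#   tma_info_thread_ipc = INST_RETIRED.ANY / CPU_CLK_UNHALTED.THREAD
# can be computed per sample or per aggregation bucket.
# The records are synthetic; real ones would be decoded from perf.data.

GROUP = ("ref-cycles", "INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD")

def ipc_from_group_reads(records):
    """Evaluate IPC over accumulated group reads (ref-cycles is the
    sampling leader; the other members are only read, not sampled)."""
    instr = sum(r["INST_RETIRED.ANY"] for r in records)
    clks = sum(r["CPU_CLK_UNHALTED.THREAD"] for r in records)
    return instr / clks if clks else 0.0

records = [
    {"ref-cycles": 1_000_003, "INST_RETIRED.ANY": 1_500_000,
     "CPU_CLK_UNHALTED.THREAD": 1_200_000},
    {"ref-cycles": 1_000_011, "INST_RETIRED.ANY": 900_000,
     "CPU_CLK_UNHALTED.THREAD": 1_100_000},
]
print(ipc_from_group_reads(records))
```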
>
> As for the interface, I suggest adding a "perf mlist" similar to
> perf evlist, which would just print what metrics could be calculated
> from the events recorded in the input file. Then one could be selected
> for use with perf report or perf annotate.
>
The "perf mlist" idea looks good, since it would make the metrics more
widely usable.
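A rough sketch of the check such a tool would do: a metric is reportable if every event in its expression was recorded. The metric/event tables below are illustrative stand-ins, not the real JSON metric definitions.

```python
# Hypothetical "perf mlist"-style check: list metrics whose constituent
# events are all present in the recorded event set. METRICS is a made-up
# stand-in for the real per-architecture metric definitions.

METRICS = {
    "tma_info_thread_ipc": {"INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD"},
    "tma_info_thread_cpi": {"INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD"},
    "tma_fb_full":         {"L1D_PEND_MISS.FB_FULL", "CPU_CLK_UNHALTED.THREAD"},
}

def computable_metrics(recorded_events):
    """Return, sorted, every metric whose events were all recorded."""
    recorded = set(recorded_events)
    return sorted(name for name, events in METRICS.items()
                  if events <= recorded)

# Recording the IPC events makes both IPC and CPI reportable, matching
# the tma_info_thread_ipc / tma_info_thread_cpi example earlier in the
# thread.
print(computable_metrics(["INST_RETIRED.ANY", "CPU_CLK_UNHALTED.THREAD"]))
```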
Thanks,
Kan
> I hope this explains enough to clarify things for you. I am also
> attaching a screenshot example for the classic matrix multiplication
> with wrong indexing, which shows that only certain lines get the metric,
> whereas lines with a low number of samples just get 0.0%.
>
> Best regards,
> -Guilherme
>
>>> - For the topdown-related metrics, especially on ICL and later
>>> platforms, the perf metrics feature is used by default. It doesn't
>>> support sampling.
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/perf/Documentation/topdown.txt?#n293
>>> - Some PMUs don't support sampling either, e.g., uncore, Power, MSR.
>>> - There are some SW events, e.g., duration_time, that you may not want
>>> to sample.
>>>
>>> You probably need to introduce a flag to ignore those metrics in perf
>>> record.
>>>
>>>>> We don't have metrics working with `perf stat record`, I
>>>>> think Kan may have volunteered for that, but it seems like something
>>>>> more urgent than expanding `perf record`. Presumably the way the
>>>>> metric would be recorded for that could also benefit this effort.
>>>>>
>>>>> If you look at the tma metrics a number of them have a "Sample with".
>>>>> For example:
>>>>> ```
>>>>> $ perf list -v
>>>>> ...
>>>>> tma_branch_mispredicts
>>>>> [This metric represents fraction of slots the CPU has wasted
>>>>> due to Branch Misprediction.
>>>>> These slots are either wasted by uops fetched from an
>>>>> incorrectly speculated program path;
>>>>> or stalls when the out-of-order part of the machine needs to
>>>>> recover its state from a
>>>>> speculative path. Sample with: BR_MISP_RETIRED.ALL_BRANCHES.
>>>>> Related metrics:
>>>>> tma_info_bad_spec_branch_misprediction_cost,tma_info_bottleneck_mispredictions,
>>>>> tma_mispredicts_resteers]
>>>>> ...
>>>>> ```
>>>>> It could be logical for `perf record -M tma_branch_mispredicts ...` to
>>>>> be translated to `perf record -e BR_MISP_RETIRED.ALL_BRANCHES ...`
>>>>> rather than to do any form of counting.
>>>>
>>>> Thanks for the pointer, I'll see how this could be done.
>>>
>>> It sounds more reasonable to me to sample some typical events and read
>>> the other members of the metric. That way we can put metrics next to
>>> the code in perf report/annotate, as Ian mentioned. It could also work
>>> around the limits of some metrics, especially the topdown-related ones.
>>> (But I'm not sure the "Sample with" field can give you the right hints.
>>> I will ask around internally.)
>>>
>>> But there are also some limits to sample read. Everything has to be in
>>> a group. That could be a problem for some big metrics.
>>> Thanks,
>>> Kan
>
> 2. runCmd: amplxe-perf record -v --control=fd:21,24 -o system-wide.perf -N -B -T --sample-cpu -d -a --compression-level=1 --threads --clockid=CLOCK_MONOTONIC_RAW -e cpu/period=0xa037a0,event=0x3c,name='CPU_CLK_UNHALTED.THREAD'/Duk,cpu/period=0xa037a0,umask=0x3,name='CPU_CLK_UNHALTED.REF_TSC'/Duk,cpu/period=0xa037a0,event=0xc0,name='INST_RETIRED.ANY'/Duk,cpu/period=0x7a12f,event=0x3c,umask=0x1,name='CPU_CLK_UNHALTED.REF_XCLK'/uk,cpu/period=0x7a12f,event=0x3c,umask=0x2,name='CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE'/uk,cpu/period=0x98968f,event=0x3c,name='CPU_CLK_UNHALTED.THREAD_P'/uk,cpu/period=0x98968f,event=0xa3,umask=0x14,cmask=0x14,name='CYCLE_ACTIVITY.STALLS_MEM_ANY'/uk,cpu/period=0x98968f,event=0xa3,umask=0x4,cmask=0x4,name='CYCLE_ACTIVITY.STALLS_TOTAL'/uk,cpu/period=0x98968f,event=0xa6,umask=0x2,name='EXE_ACTIVITY.1_PORTS_UTIL'/uk,cpu/period=0x98968f,event=0xa6,umask=0x4,name='EXE_ACTIVITY.2_PORTS_UTIL'/uk,cpu/period=0x98968f,event=0xa6,umask=0x40,name='EXE_ACTIVITY.BOUND_ON_STORES'/uk,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x400406,name='FRONTEND_RETIRED.LATENCY_GE_4_PS'/ukpp,cpu/period=0x98968f,event=0x9c,umask=0x1,name='IDQ_UOPS_NOT_DELIVERED.CORE'/uk,cpu/period=0x98968f,event=0xd,umask=0x1,name='INT_MISC.RECOVERY_CYCLES'/uk,cpu/period=0x98968f,event=0xe,umask=0x1,name='UOPS_ISSUED.ANY'/uk,cpu/period=0x98968f,event=0xc2,umask=0x2,name='UOPS_RETIRED.RETIRE_SLOTS'/uk,cpu/period=0x7a12f,event=0xe6,umask=0x1,name='BACLEARS.ANY'/uk,cpu/period=0x1e84ad,event=0xc5,name='BR_MISP_RETIRED.ALL_BRANCHES'/uk,cpu/period=0x98968f,event=0xab,umask=0x2,name='DSB2MITE_SWITCHES.PENALTY_CYCLES'/uk,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x1,name='FRONTEND_RETIRED.ANY_DSB_MISS'/uk,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x11,name='FRONTEND_RETIRED.DSB_MISS_PS'/ukpp,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x13,name='FRONTEND_RETIRED.L2_MISS_PS'/ukpp,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x401006,name='FRONTEND_RETIRED.LATENCY_GE_16_PS'
/ukpp,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x100206,name='FRONTEND_RETIRED.LATENCY_GE_2_BUBBLES_GE_1_PS'/ukpp,cpu/period=0x7a143,event=0xc6,umask=0x1,frontend=0x15,name='FRONTEND_RETIRED.STLB_MISS_PS'/ukpp,cpu/period=0x98968f,event=0x80,umask=0x4,name='ICACHE_16B.IFDATA_STALL'/uk,cpu/period=0x98968f,event=0x80,edge=0x1,umask=0x4,cmask=0x1,name='ICACHE_16B.IFDATA_STALL:cmask=1:e=yes'/uk,cpu/period=0xf424f,event=0x83,umask=0x4,name='ICACHE_64B.IFTAG_STALL'/uk,cpu/period=0x98968f,event=0x79,umask=0x18,cmask=0x4,name='IDQ.ALL_DSB_CYCLES_4_UOPS'/uk,cpu/period=0x98968f,event=0x79,umask=0x18,cmask=0x1,name='IDQ.ALL_DSB_CYCLES_ANY_UOPS'/uk,cpu/period=0x98968f,event=0x79,umask=0x24,cmask=0x4,name='IDQ.ALL_MITE_CYCLES_4_UOPS'/uk,cpu/period=0x98968f,event=0x79,umask=0x24,cmask=0x1,name='IDQ.ALL_MITE_CYCLES_ANY_UOPS'/uk,cpu/period=0x98968f,event=0x79,umask=0x8,name='IDQ.DSB_UOPS'/uk,cpu/period=0x98968f,event=0x79,umask=0x4,name='IDQ.MITE_UOPS'/uk,cpu/period=0x98968f,event=0x79,edge=0x1,umask=0x30,cmask=0x1,name='IDQ.MS_SWITCHES'/uk,cpu/period=0x98968f,event=0x79,umask=0x30,name='IDQ.MS_UOPS'/uk,cpu/period=0x98968f,event=0x9c,umask=0x1,cmask=0x4,name='IDQ_UOPS_NOT_DELIVERED.CYCLES_0_UOPS_DELIV.CORE'/uk,cpu/period=0x98968f,event=0x87,umask=0x1,name='ILD_STALL.LCP'/uk,cpu/period=0x98968f,event=0x55,umask=0x1,cmask=0x1,name='INST_DECODED.DECODERS:cmask=1'/uk,cpu/period=0x98968f,event=0x55,umask=0x1,cmask=0x2,name='INST_DECODED.DECODERS:cmask=2'/uk,cpu/period=0x98968f,event=0xd,umask=0x80,name='INT_MISC.CLEAR_RESTEER_CYCLES'/uk,cpu/period=0x7a12f,event=0xc3,edge=0x1,umask=0x1,cmask=0x1,name='MACHINE_CLEARS.COUNT'/uk,cpu/period=0x1e84ad,event=0xc5,umask=0x4,name='BR_MISP_RETIRED.ALL_BRANCHES_PS'/ukpp,cpu/period=0x98968f,event=0xa3,umask=0x8,cmask=0x8,name='CYCLE_ACTIVITY.CYCLES_L1D_MISS'/uk,cpu/period=0x98968f,event=0xa3,umask=0x10,cmask=0x10,name='CYCLE_ACTIVITY.CYCLES_MEM_ANY'/uk,cpu/period=0x98968f,event=0xa3,umask=0xc,cmask=0xc,name='CYCLE_ACTIVITY.STALLS_L1D_MISS'/uk
,cpu/period=0x98968f,event=0xa3,umask=0x5,cmask=0x5,name='CYCLE_ACTIVITY.STALLS_L2_MISS'/uk,cpu/period=0x98968f,event=0xa3,umask=0x6,cmask=0x6,name='CYCLE_ACTIVITY.STALLS_L3_MISS'/uk,cpu/period=0x98968f,event=0x8,umask=0x20,cmask=0x1,name='DTLB_LOAD_MISSES.STLB_HIT:cmask=1'/uk,cpu/period=0x7a12f,event=0x8,umask=0x10,cmask=0x1,name='DTLB_LOAD_MISSES.WALK_ACTIVE'/uk,cpu/period=0x7a12f,event=0x49,umask=0x20,cmask=0x1,name='DTLB_STORE_MISSES.STLB_HIT:cmask=1'/uk,cpu/period=0x7a12f,event=0x49,umask=0x10,cmask=0x1,name='DTLB_STORE_MISSES.WALK_ACTIVE'/uk,cpu/period=0x98968f,event=0x48,umask=0x2,cmask=0x1,name='L1D_PEND_MISS.FB_FULL:cmask=1'/uk,cpu/period=0x98968f,event=0x48,umask=0x1,name='L1D_PEND_MISS.PENDING'/uk,cpu/period=0xf424f,event=0x24,umask=0xe2,name='L2_RQSTS.ALL_RFO'/uk,cpu/period=0xf424f,event=0x24,umask=0xc2,name='L2_RQSTS.RFO_HIT'/uk,cpu/period=0x7a12f,event=0x3,umask=0x8,name='LD_BLOCKS.NO_SR'/uk,cpu/period=0x7a12f,event=0x3,umask=0x2,name='LD_BLOCKS.STORE_FORWARD'/uk,cpu/period=0x7a12f,event=0x7,umask=0x1,name='LD_BLOCKS_PARTIAL.ADDRESS_ALIAS'/uk,cpu/period=0x98968f,event=0xd0,umask=0x82,name='MEM_INST_RETIRED.ALL_STORES_PS'/ukpp,cpu/period=0x7a143,event=0xd0,umask=0x21,name='MEM_INST_RETIRED.LOCK_LOADS_PS'/ukpp,cpu/period=0x7a12f,event=0xd0,umask=0x41,name='MEM_INST_RETIRED.SPLIT_LOADS_PS'/ukpp,cpu/period=0x7a12f,event=0xd0,umask=0x42,name='MEM_INST_RETIRED.SPLIT_STORES_PS'/ukpp,cpu/period=0x7a12f,event=0xd0,umask=0x11,name='MEM_INST_RETIRED.STLB_MISS_LOADS_PS'/ukpp,cpu/period=0x7a12f,event=0xd0,umask=0x12,name='MEM_INST_RETIRED.STLB_MISS_STORES_PS'/ukpp,cpu/period=0x186d7,event=0xd2,umask=0x4,name='MEM_LOAD_L3_HIT_RETIRED.XSNP_HITM_PS'/ukpp,cpu/period=0x186d7,event=0xd2,umask=0x2,name='MEM_LOAD_L3_HIT_RETIRED.XSNP_HIT_PS'/ukpp,cpu/period=0x186d7,event=0xd2,umask=0x1,name='MEM_LOAD_L3_HIT_RETIRED.XSNP_MISS_PS'/ukpp,cpu/period=0x7a143,event=0xd3,umask=0x1,name='MEM_LOAD_L3_MISS_RETIRED.LOCAL_DRAM_PS'/ukpp,cpu/period=0x7a143,event=0xd3,umask=0x2,name='MEM_L
OAD_L3_MISS_RETIRED.REMOTE_DRAM_PS'/ukpp,cpu/period=0x7a143,event=0xd3,umask=0x8,name='MEM_LOAD_L3_MISS_RETIRED.REMOTE_FWD'/uk,cpu/period=0x7a143,event=0xd3,umask=0x4,name='MEM_LOAD_L3_MISS_RETIRED.REMOTE_HITM_PS'/ukpp,cpu/period=0x7a143,event=0xd3,umask=0x10,name='MEM_LOAD_L3_MISS_RETIRED.REMOTE_PMM_PS'/ukpp,cpu/period=0x7a12f,event=0xd1,umask=0x40,name='MEM_LOAD_RETIRED.FB_HIT_PS'/ukpp,cpu/period=0x98968f,event=0xd1,umask=0x1,name='MEM_LOAD_RETIRED.L1_HIT_PS'/ukpp,cpu/period=0x7a12f,event=0xd1,umask=0x8,name='MEM_LOAD_RETIRED.L1_MISS_PS'/ukpp,cpu/period=0x7a12f,event=0xd1,umask=0x2,name='MEM_LOAD_RETIRED.L2_HIT_PS'/ukpp,cpu/period=0x3d0f9,event=0xd1,umask=0x4,name='MEM_LOAD_RETIRED.L3_HIT_PS'/ukpp,cpu/period=0x7a143,event=0xd1,umask=0x20,name='MEM_LOAD_RETIRED.L3_MISS_PS'/ukpp,cpu/period=0x7a143,event=0xd1,umask=0x80,name='MEM_LOAD_RETIRED.LOCAL_PMM_PS'/ukpp,cpu/period=0x98968f,event=0xb2,umask=0x1,name='OFFCORE_REQUESTS_BUFFER.SQ_FULL'/uk,cpu/period=0x98968f,event=0x60,umask=0x8,cmask=0x4,name='OFFCORE_REQUESTS_OUTSTANDING.ALL_DATA_RD:cmask=4'/uk,cpu/period=0x98968f,event=0x60,umask=0x8,cmask=0x1,name='OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DATA_RD'/uk,cpu/period=0x98968f,event=0x60,umask=0x4,cmask=0x1,name='OFFCORE_REQUESTS_OUTSTANDING.CYCLES_WITH_DEMAND_RFO'/uk,cpu/period=0x98968f,event=0x14,umask=0x1,cmask=0x1,name='ARITH.DIVIDER_ACTIVE'/uk,cpu/period=0x98968f,event=0xa6,umask=0x1,name='EXE_ACTIVITY.EXE_BOUND_0_PORTS'/uk,cpu/period=0x98968f,event=0xc7,name='FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE:umask=0xfc'/uk,cpu/period=0x98968f,event=0xc7,name='FP_ARITH_INST_RETIRED.SCALAR_SINGLE:umask=0x03'/uk,cpu/period=0x98968f,event=0x59,umask=0x1,name='PARTIAL_RAT_STALLS.SCOREBOARD'/uk,cpu/period=0x98968f,event=0xc0,umask=0x1,name='INST_RETIRED.PREC_DIST'/ukpp,cpu/period=0x98968f,event=0xcc,umask=0x40,name='ROB_MISC_EVENTS.PAUSE_INST'/uk,cpu/period=0x98968f,event=0xa1,umask=0x1,name='UOPS_DISPATCHED_PORT.PORT_0'/uk,cpu/period=0x98968f,event=0xa1,umask=0x2,name='UOPS
_DISPATCHED_PORT.PORT_1'/uk,cpu/period=0x98968f,event=0xa1,umask=0x4,name='UOPS_DISPATCHED_PORT.PORT_2'/uk,cpu/period=0x98968f,event=0xa1,umask=0x8,name='UOPS_DISPATCHED_PORT.PORT_3'/uk,cpu/period=0x98968f,event=0xa1,umask=0x10,name='UOPS_DISPATCHED_PORT.PORT_4'/uk,cpu/period=0x98968f,event=0xa1,umask=0x20,name='UOPS_DISPATCHED_PORT.PORT_5'/uk,cpu/period=0x98968f,event=0xa1,umask=0x40,name='UOPS_DISPATCHED_PORT.PORT_6'/uk,cpu/period=0x98968f,event=0xa1,umask=0x80,name='UOPS_DISPATCHED_PORT.PORT_7'/uk,cpu/period=0x98968f,event=0xb1,umask=0x2,cmask=0x1,name='UOPS_EXECUTED.CORE_CYCLES_GE_1'/uk,cpu/period=0x98968f,event=0xb1,umask=0x2,cmask=0x2,name='UOPS_EXECUTED.CORE_CYCLES_GE_2'/uk,cpu/period=0x98968f,event=0xb1,umask=0x2,cmask=0x3,name='UOPS_EXECUTED.CORE_CYCLES_GE_3'/uk,cpu/period=0x98968f,event=0xb1,inv=0x1,umask=0x2,cmask=0x1,name='UOPS_EXECUTED.CORE_CYCLES_NONE'/uk,cpu/period=0x98968f,event=0xb1,umask=0x1,name='UOPS_EXECUTED.THREAD'/uk,cpu/period=0x98968f,event=0xb1,umask=0x10,name='UOPS_EXECUTED.X87'/uk,cpu/period=0x98968f,event=0xe,umask=0x2,name='UOPS_ISSUED.VECTOR_WIDTH_MISMATCH'/uk,cpu/period=0x98968f,event=0xc2,umask=0x4,name='UOPS_RETIRED.MACRO_FUSED'/uk,cpu/period=0x1e84ad,event=0xc4,name='BR_INST_RETIRED.ALL_BRANCHES'/uk,cpu/period=0x98968f,event=0xc7,umask=0x4,name='FP_ARITH_INST_RETIRED.128B_PACKED_DOUBLE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x8,name='FP_ARITH_INST_RETIRED.128B_PACKED_SINGLE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x10,name='FP_ARITH_INST_RETIRED.256B_PACKED_DOUBLE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x20,name='FP_ARITH_INST_RETIRED.256B_PACKED_SINGLE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x40,name='FP_ARITH_INST_RETIRED.512B_PACKED_DOUBLE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x80,name='FP_ARITH_INST_RETIRED.512B_PACKED_SINGLE'/uk,cpu/period=0x7a12f,event=0xca,umask=0x1e,cmask=0x1,name='FP_ASSIST.ANY'/uk,cpu/period=0x98968f,event=0xc0,umask=0x2,name='INST_RETIRED.NOP'/uk,cpu/period=0x98968f,event=0xd0,umask=0x83,name=
'MEM_INST_RETIRED.ANY'/uk,cpu/period=0x7a12f,event=0xc1,umask=0x3f,name='OTHER_ASSISTS.ANY'/uk,cpu/period=0x7a12f,event=0xb7,offcore_rsp=0x8003c0001,umask=0x1,name='OCR.DEMAND_DATA_RD.L3_HIT.HIT_OTHER_CORE_FWD'/uk,cpu/period=0x7a12f,event=0xbb,offcore_rsp=0x10003c0002,umask=0x1,name='OCR.DEMAND_RFO.L3_HIT.HITM_OTHER_CORE'/uk,cpu/period=0x7a12f,event=0xb7,offcore_rsp=0x103fc00020,umask=0x1,name='OCR.PF_L2_RFO.L3_MISS.REMOTE_HITM'/uk,cpu/period=0x7a12f,event=0xbb,offcore_rsp=0x10003c0001,umask=0x1,name='OCR.DEMAND_DATA_RD.L3_HIT.HITM_OTHER_CORE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x2,name='FP_ARITH_INST_RETIRED.SCALAR_SINGLE'/uk,cpu/period=0x98968f,event=0xc7,umask=0x1,name='FP_ARITH_INST_RETIRED.SCALAR_DOUBLE'/uk,cpu/period=0x7a12f,event=0xb7,offcore_rsp=0x103fc00002,umask=0x1,name='OCR.DEMAND_RFO.L3_MISS.REMOTE_HITM'/uk,cpu/period=0x7a12f,event=0xbb,offcore_rsp=0x10003c0020,umask=0x1,name='OCR.PF_L2_RFO.L3_HIT.HITM_OTHER_CORE'/uk amplxe-perf-sync sync sys