Re: [PATCH v3 1/n] perf/core: addressing 4x slowdown during per-process profiling of STREAM benchmark on Intel Xeon Phi

From: Alexey Budankov
Date: Fri Jun 16 2017 - 10:22:41 EST


On 16.06.2017 17:08, Alexey Budankov wrote:
> On 16.06.2017 12:09, Mark Rutland wrote:
> > On Fri, Jun 16, 2017 at 01:10:10AM +0300, Alexey Budankov wrote:
> > > On 15.06.2017 22:56, Mark Rutland wrote:
> > > > On Thu, Jun 15, 2017 at 08:41:42PM +0300, Alexey Budankov wrote:
> > > > > This series of patches continues v2 and addresses the review
> > > > > comments captured so far.
> > > > >
> > > > > Specifically, this patch replaces the pinned_groups and
> > > > > flexible_groups lists of perf_event_context with CPU-indexed
> > > > > red-black trees, avoiding data structure duplication and making
> > > > > it possible to iterate over the event groups of a specific CPU
> > > > > only.
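[As an illustration of the claimed benefit, a minimal sketch of per-CPU
iteration over such a CPU-indexed tree might look as follows. This is
illustrative only, not code from the patch; the group_node field, the
fn() callback, and keying solely on event->cpu are assumptions.]

#include <linux/rbtree.h>
#include <linux/perf_event.h>

/*
 * Visit only the event groups of one CPU. Assumes the tree is keyed by
 * event->cpu, so an in-order walk from the leftmost match covers all
 * events of that CPU and nothing else.
 */
static void perf_cpu_tree_iterate_cpu(struct rb_root *tree, int cpu,
				      void (*fn)(struct perf_event *))
{
	struct rb_node *node = tree->rb_node;
	struct rb_node *match = NULL;

	/* Find the leftmost event whose key equals @cpu. */
	while (node) {
		struct perf_event *event =
			rb_entry(node, struct perf_event, group_node);

		if (cpu < event->cpu) {
			node = node->rb_left;
		} else if (cpu > event->cpu) {
			node = node->rb_right;
		} else {
			match = node;	/* remember it, keep going left */
			node = node->rb_left;
		}
	}

	/* Walk in order until the key changes. */
	while (match) {
		struct perf_event *event =
			rb_entry(match, struct perf_event, group_node);

		if (event->cpu != cpu)
			break;
		fn(event);
		match = rb_next(match);
	}
}

[Compared with walking a flat pinned_groups/flexible_groups list, such a
walk touches only the events keyed to the CPU of interest, which is
presumably where the savings on many-CPU systems like Xeon Phi would
come from.]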

> > > > If you use --per-thread, I take it the overhead is significantly
> > > > lowered?

> > > Please ask if more details are needed.

> > IIUC, you're seeing the slowdown when using perf record, correct?

> Correct. Specifically in per-process mode, i.e. without the -a option.


> > There's a --per-thread option to ask perf record to not duplicate the
> > event per-cpu.
> >
> > If you use that, what amount of slowdown do you see?
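[A minimal pair of invocations showing the difference between the two
modes; illustrative only, with "cycles" and ./stream standing in for the
real event list and workload used below:]

# task mode with --per-thread: one event instance per workload thread
perf record --per-thread -e cycles -- ./stream

# default task mode: the event is additionally duplicated per-cpu
perf record -e cycles -- ./stream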

After applying all three patches:

- system-wide collection:

[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 303.795 MB perf.data (~13272985 samples) ]
2162.08user 176.24system 0:12.97elapsed 18021%CPU (0avgtext+0avgdata 1187208maxresident)k
0inputs+622624outputs (0major+1360285minor)pagefaults 0swaps

- per-process collection:

[ perf record: Woken up 5 times to write data ]
[ perf record: Captured and wrote 1.079 MB perf.data (~47134 samples) ]
2102.39user 153.88system 0:12.78elapsed 17645%CPU (0avgtext+0avgdata 1187156maxresident)k
0inputs+2272outputs (0major+1181660minor)pagefaults 0swaps

Elapsed times look similar. Data file sizes differ significantly.

Test script:

#!/bin/bash

echo 0 > /proc/sys/kernel/watchdog
echo 0 > /proc/sys/kernel/perf_event_paranoid
# [-a] denotes the -a option, added for the system-wide run only:
/usr/bin/time "/root/abudanko/vtune_amplifier_2018_zip/bin64/amplxe-perf" \
record --per-thread [-a] -N -B -T -R -d \
-e \
cpu/period=0x155cc0,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x3c,in_tx=0x0,ldlat=0x0,umask=0x0,in_tx_cp=0x0,offcore_rsp=0x0/Duk,\
cpu/period=0x155cc0,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x0,in_tx=0x0,ldlat=0x0,umask=0x3,in_tx_cp=0x0,offcore_rsp=0x0/Duk,\
cpu/period=0x155cc0,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc0,in_tx=0x0,ldlat=0x0,umask=0x0,in_tx_cp=0x0,offcore_rsp=0x0/Duk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x3,in_tx=0x0,ldlat=0x0,umask=0x8,in_tx_cp=0x0,offcore_rsp=0x0/ukpp,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x3,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/ukpp,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x2,in_tx_cp=0x0,offcore_rsp=0x0/ukpp,\
cpu/period=0x186a7,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x4,in_tx_cp=0x0,offcore_rsp=0x0/ukpp,\
cpu/period=0x1e8483,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x3c,in_tx=0x0,ldlat=0x0,umask=0x0,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x1e8483,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc2,in_tx=0x0,ldlat=0x0,umask=0x10,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xca,in_tx=0x0,ldlat=0x0,umask=0x4,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xca,in_tx=0x0,ldlat=0x0,umask=0x90,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x1e8483,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc2,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc3,in_tx=0x0,ldlat=0x0,umask=0x4,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x20,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x5,in_tx=0x0,ldlat=0x0,umask=0x3,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x1e8483,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xcd,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x3,in_tx=0x0,ldlat=0x0,umask=0x4,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x86,in_tx=0x0,ldlat=0x0,umask=0x4,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x10,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x40,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x80,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc2,in_tx=0x0,ldlat=0x0,umask=0x40,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc2,in_tx=0x0,ldlat=0x0,umask=0x20,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x5,in_tx=0x0,ldlat=0x0,umask=0x2,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xe6,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xe7,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc3,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0xc3,in_tx=0x0,ldlat=0x0,umask=0x2,in_tx_cp=0x0,offcore_rsp=0x0/uk,\
cpu/period=0x30d43,pc=0x0,any=0x0,inv=0x0,edge=0x0,cmask=0x0,event=0x4,in_tx=0x0,ldlat=0x0,umask=0x1,in_tx_cp=0x0,offcore_rsp=0x0/uk \
-- ./stream


> > It might be preferable to not open task-bound per-cpu events on systems
> > with large cpu counts, and it would be good to know what the trade-off
> > looks like for this case.

> > > > > +static void
> > > > > +perf_cpu_tree_insert(struct rb_root *tree, struct perf_event *event)
> > > > > +{
> > > > > +	struct rb_node **node;
> > > > > +	struct rb_node *parent;
> > > > > +
> > > > > +	WARN_ON_ONCE(!tree || !event);
> > > > > +
> > > > > +	node = &tree->rb_node;
> > > > > +	parent = *node;

> > > > The first iteration of the loop handles this, so it can go.

> > > If the tree is empty, parent will be uninitialized, which is harmful.

> > Sorry; my bad.

> > Thanks,
> > Mark.
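[For reference, a completed and annotated sketch of the insertion
routine discussed above. The loop body and the group_node field name
are assumptions for illustration, since the quoted hunk stops right
before the tree walk:]

#include <linux/rbtree.h>
#include <linux/perf_event.h>

static void
perf_cpu_tree_insert(struct rb_root *tree, struct perf_event *event)
{
	struct rb_node **node;
	struct rb_node *parent;

	WARN_ON_ONCE(!tree || !event);

	node = &tree->rb_node;
	/*
	 * This also sets parent = NULL for an empty tree: the loop
	 * below then never runs, and rb_link_node() still gets a
	 * well-defined parent -- the point made above about why the
	 * assignment cannot simply be dropped.
	 */
	parent = *node;

	while (*node) {
		struct perf_event *this =
			rb_entry(*node, struct perf_event, group_node);

		parent = *node;
		if (event->cpu < this->cpu)
			node = &parent->rb_left;
		else
			node = &parent->rb_right;
	}

	rb_link_node(&event->group_node, parent, node);
	rb_insert_color(&event->group_node, tree);
}

[With the explicit assignment, the empty-tree case degenerates to
rb_link_node(&event->group_node, NULL, &tree->rb_node), which correctly
installs the first event as the root of the tree.]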