Re: [LKP] [/proc/stat] 3047027b34: reaim.jobs_per_min -4.8% regression

From: Kees Cook
Date: Mon Jan 21 2019 - 15:03:14 EST


On Fri, Jan 18, 2019 at 9:44 PM kernel test robot <rong.a.chen@xxxxxxxxx> wrote:
>
> Greeting,
>
> FYI, we noticed a -4.8% regression of reaim.jobs_per_min due to commit:
>
>
> commit: 3047027b34b8c6404b509903058b89836093acc7 ("[PATCH 2/2] /proc/stat: Add sysctl parameter to control irq counts latency")
> url: https://github.com/0day-ci/linux/commits/Waiman-Long/proc-stat-Reduce-irqs-counting-performance-overhead/20190108-104818

Is this expected? (It also looks like other things in the report below
got faster, so I don't understand why this particular regression was
called out.)
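
For context, my reading of the patch title is that it caches the summed
per-CPU irq counts and only re-sums them once the cached copy is older
than a sysctl-controlled interval, so /proc/stat readers stop paying for
the full walk on every read. A minimal userspace sketch of that caching
pattern is below; the names (irq_latency_ms, sum_all_irqs, and so on)
are invented for illustration and are not taken from the actual patch.

/*
 * Sketch of the "recompute at most once per interval" pattern that a
 * sysctl like this typically gates.  Illustrative only.
 */
#include <stdio.h>
#include <time.h>

static unsigned int irq_latency_ms = 1000;   /* stand-in for the sysctl knob */
static unsigned long long cached_total;      /* last summed irq count */
static struct timespec cached_at;            /* when it was last summed */

/* Stand-in for the expensive walk over every CPU and every irq counter. */
static unsigned long long sum_all_irqs(void)
{
        static unsigned long long fake;
        return fake += 12345;
}

static long long ms_since(const struct timespec *then)
{
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);
        return (now.tv_sec - then->tv_sec) * 1000LL +
               (now.tv_nsec - then->tv_nsec) / 1000000LL;
}

/* What a cheap show_stat_irqs() would do: reuse the cached sum while fresh. */
static unsigned long long get_irq_total(void)
{
        if (cached_at.tv_sec == 0 || ms_since(&cached_at) >= irq_latency_ms) {
                cached_total = sum_all_irqs();
                clock_gettime(CLOCK_MONOTONIC, &cached_at);
        }
        return cached_total;
}

int main(void)
{
        for (int i = 0; i < 3; i++)
                printf("intr %llu\n", get_irq_total());
        return 0;
}

If the patch follows this shape, the knob only controls how stale the
"intr" line of /proc/stat is allowed to be; the other counters should be
unaffected.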

-Kees

>
>
> in testcase: reaim
> on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
> with following parameters:
>
> runtime: 300s
> nr_task: 5000
> test: shared_memory
> cpufreq_governor: performance
> ucode: 0x3d
>
> test-description: REAIM is an updated and improved version of the AIM 7 benchmark.
> test-url: https://sourceforge.net/projects/re-aim-7/
>
>
>
> Details are as below:
> -------------------------------------------------------------------------------------------------->
>
>
> To reproduce:
>
> git clone https://github.com/intel/lkp-tests.git
> cd lkp-tests
> bin/lkp install job.yaml # job file is attached in this email
> bin/lkp run job.yaml
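
(Not a replacement for the lkp job above, but if someone just wants to
poke at the code path these patches touch by hand, a timed read of the
"intr" line of /proc/stat is enough. Illustrative only:)

/* Time one read of the /proc/stat "intr" line, the output this
 * series aims to make cheaper to generate. */
#include <stdio.h>
#include <string.h>
#include <time.h>

int main(void)
{
        char line[1 << 16];
        struct timespec a, b;
        FILE *f;

        clock_gettime(CLOCK_MONOTONIC, &a);
        f = fopen("/proc/stat", "r");
        if (!f) {
                perror("/proc/stat");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "intr ", 5) == 0) {
                        unsigned long long total;

                        if (sscanf(line + 5, "%llu", &total) == 1)
                                printf("total interrupts: %llu\n", total);
                        break;
                }
        }
        fclose(f);
        clock_gettime(CLOCK_MONOTONIC, &b);
        printf("read took %.3f ms\n",
               (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6);
        return 0;
}

On a machine this size the "intr" line is long, and summing the per-CPU
counters to produce it is presumably the cost the series is trying to
avoid paying on every read.
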
>
> =========================================================================================
> compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
> gcc-7/performance/x86_64-rhel-7.2/5000/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep5/shared_memory/reaim/0x3d
>
> commit:
> 51e8bce392 ("/proc/stat: Extract irqs counting code into show_stat_irqs()")
> 3047027b34 ("/proc/stat: Add sysctl parameter to control irq counts latency")
>
> 51e8bce392dd2cc9 3047027b34b8c6404b50990305
> ---------------- --------------------------
> fail:runs %reproduction fail:runs
> | | |
> 1:4 -25% :4 kmsg.igb#:#:#:exceed_max#second
> %stddev %change %stddev
> \ | \
> 101.96 +7.5% 109.60 reaim.child_systime
> 33.32 -1.8% 32.73 reaim.child_utime
> 5534451 -4.8% 5271308 reaim.jobs_per_min
> 1106 -4.8% 1054 reaim.jobs_per_min_child
> 5800927 -4.9% 5517884 reaim.max_jobs_per_min
> 5.42 +5.0% 5.69 reaim.parent_time
> 1.51 +5.3% 1.59 reaim.std_dev_time
> 29374932 -2.8% 28558608 reaim.time.minor_page_faults
> 1681 +1.6% 1708 reaim.time.percent_of_cpu_this_job_got
> 3841 +4.5% 4012 reaim.time.system_time
> 1234 -4.4% 1180 reaim.time.user_time
> 18500000 -2.7% 18000000 reaim.workload
> 5495296 ± 9% -9.5% 4970496 meminfo.DirectMap2M
> 5142 ± 18% -43.2% 2920 ± 46% numa-vmstat.node0.nr_shmem
> 29.00 ± 32% +56.9% 45.50 ± 10% vmstat.procs.r
> 67175 ± 37% +66.6% 111910 ± 20% numa-meminfo.node0.AnonHugePages
> 20591 ± 18% -43.2% 11691 ± 46% numa-meminfo.node0.Shmem
> 64688 ± 6% -36.8% 40906 ± 19% slabinfo.kmalloc-8.active_objs
> 64691 ± 6% -36.8% 40908 ± 19% slabinfo.kmalloc-8.num_objs
> 37.36 ± 7% +11.1% 41.53 ± 4% boot-time.boot
> 29.15 ± 6% +14.3% 33.31 ± 3% boot-time.dhcp
> 847.73 ± 9% +12.9% 957.09 ± 4% boot-time.idle
> 202.50 ±100% +101.7% 408.50 proc-vmstat.nr_mlock
> 8018 ± 9% -12.3% 7034 ± 2% proc-vmstat.nr_shmem
> 29175944 -2.8% 28369676 proc-vmstat.numa_hit
> 29170351 -2.8% 28364111 proc-vmstat.numa_local
> 5439 ± 5% -18.7% 4423 ± 7% proc-vmstat.pgactivate
> 30220220 -2.8% 29374906 proc-vmstat.pgalloc_normal
> 30182224 -2.7% 29368266 proc-vmstat.pgfault
> 30186671 -2.8% 29341792 proc-vmstat.pgfree
> 69510 ± 12% -34.2% 45759 ± 33% sched_debug.cfs_rq:/.load.avg
> 30.21 ± 24% -33.6% 20.05 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.avg
> 66447 ± 12% -37.6% 41460 ± 37% sched_debug.cfs_rq:/.runnable_weight.avg
> 12.35 ± 4% +88.0% 23.22 ± 15% sched_debug.cpu.clock.stddev
> 12.35 ± 4% +88.0% 23.22 ± 15% sched_debug.cpu.clock_task.stddev
> 30.06 ± 12% -26.5% 22.10 ± 13% sched_debug.cpu.cpu_load[0].avg
> 29.37 ± 9% -22.6% 22.72 ± 13% sched_debug.cpu.cpu_load[1].avg
> 28.71 ± 6% -21.1% 22.66 ± 16% sched_debug.cpu.cpu_load[2].avg
> 17985 -12.0% 15823 ± 2% sched_debug.cpu.curr->pid.max
> 67478 ± 6% -32.5% 45531 ± 24% sched_debug.cpu.load.avg
> 10369 ± 49% -100.0% 0.00 sched_debug.cpu.load.min
> 0.21 ± 34% -100.0% 0.00 sched_debug.cpu.nr_running.min
> 12.98 -16.3% 10.86 ± 11% perf-stat.i.MPKI
> 5.712e+09 -3.8% 5.492e+09 perf-stat.i.branch-instructions
> 1.024e+08 -3.7% 98557208 perf-stat.i.branch-misses
> 8.17 +0.4 8.58 ± 2% perf-stat.i.cache-miss-rate%
> 7839589 +10.5% 8659798 perf-stat.i.cache-misses
> 86324420 +3.8% 89595898 ± 2% perf-stat.i.cache-references
> 1.55 ± 2% -4.0% 1.49 perf-stat.i.cpi
> 2290 -1.9% 2246 perf-stat.i.cpu-migrations
> 4667 -10.9% 4160 perf-stat.i.cycles-between-cache-misses
> 8.749e+09 -3.9% 8.409e+09 perf-stat.i.dTLB-loads
> 527660 ± 3% -15.0% 448539 ± 5% perf-stat.i.dTLB-store-misses
> 5.747e+09 -4.3% 5.499e+09 perf-stat.i.dTLB-stores
> 53047071 -3.5% 51190942 perf-stat.i.iTLB-load-misses
> 20576112 -13.7% 17759009 ± 2% perf-stat.i.iTLB-loads
> 3.207e+10 -3.9% 3.083e+10 perf-stat.i.instructions
> 0.77 -2.3% 0.75 perf-stat.i.ipc
> 99933 -3.8% 96127 perf-stat.i.minor-faults
> 4325719 +5.6% 4568226 perf-stat.i.node-load-misses
> 52.39 -2.0 50.36 perf-stat.i.node-store-miss-rate%
> 1411700 +20.9% 1706321 perf-stat.i.node-store-misses
> 883790 +34.1% 1184836 perf-stat.i.node-stores
> 99933 -3.8% 96127 perf-stat.i.page-faults
> 2.69 +7.9% 2.91 perf-stat.overall.MPKI
> 9.08 +0.6 9.67 ± 2% perf-stat.overall.cache-miss-rate%
> 1.13 +5.0% 1.19 perf-stat.overall.cpi
> 4633 -8.6% 4233 perf-stat.overall.cycles-between-cache-misses
> 0.01 ± 2% -0.0 0.01 ± 4% perf-stat.overall.dTLB-store-miss-rate%
> 72.05 +2.2 74.24 perf-stat.overall.iTLB-load-miss-rate%
> 0.88 -4.8% 0.84 perf-stat.overall.ipc
> 78.12 +1.4 79.52 perf-stat.overall.node-load-miss-rate%
> 61.49 -2.5 59.01 perf-stat.overall.node-store-miss-rate%
> 5.688e+09 -3.8% 5.471e+09 perf-stat.ps.branch-instructions
> 1.02e+08 -3.7% 98177745 perf-stat.ps.branch-misses
> 7807912 +10.5% 8626353 perf-stat.ps.cache-misses
> 85999504 +3.8% 89266688 ± 2% perf-stat.ps.cache-references
> 2282 -1.9% 2239 perf-stat.ps.cpu-migrations
> 8.713e+09 -3.9% 8.376e+09 perf-stat.ps.dTLB-loads
> 525761 ± 3% -15.0% 446967 ± 5% perf-stat.ps.dTLB-store-misses
> 5.723e+09 -4.3% 5.478e+09 perf-stat.ps.dTLB-stores
> 52823322 -3.5% 50990190 perf-stat.ps.iTLB-load-misses
> 20490261 -13.7% 17689923 ± 2% perf-stat.ps.iTLB-loads
> 3.193e+10 -3.8% 3.071e+10 perf-stat.ps.instructions
> 99560 -3.8% 95786 perf-stat.ps.minor-faults
> 4308031 +5.6% 4550453 perf-stat.ps.node-load-misses
> 1405805 +20.9% 1699670 perf-stat.ps.node-store-misses
> 880319 +34.1% 1180378 perf-stat.ps.node-stores
> 99560 -3.8% 95786 perf-stat.ps.page-faults
> 9.664e+12 -2.8% 9.397e+12 perf-stat.total.instructions
> 33.09 ± 7% -25.0 8.10 ± 45% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
> 32.21 ± 7% -24.3 7.92 ± 45% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 22.36 ± 7% -17.1 5.24 ± 49% perf-profile.calltrace.cycles-pp.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 10.57 ± 5% -8.3 2.31 ± 48% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
> 9.60 ± 4% -7.6 2.03 ± 49% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
> 3.68 ± 8% -3.3 0.43 ±102% perf-profile.calltrace.cycles-pp.security_ipc_permission.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 4.37 ± 9% -2.8 1.60 ± 50% perf-profile.calltrace.cycles-pp.ipc_obtain_object_check.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 2.30 ± 2% -1.9 0.44 ±102% perf-profile.calltrace.cycles-pp.security_sem_semop.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 5.10 ± 3% -1.6 3.46 ± 10% perf-profile.calltrace.cycles-pp.ipc_has_perm.security_ipc_permission.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 3.20 -1.3 1.95 ± 23% perf-profile.calltrace.cycles-pp.avc_has_perm.ipc_has_perm.security_ipc_permission.do_semtimedop.do_syscall_64
> 2.12 ± 3% -0.4 1.76 ± 4% perf-profile.calltrace.cycles-pp.avc_has_perm.ipc_has_perm.security_sem_semop.do_semtimedop.do_syscall_64
> 1.03 ± 2% -0.1 0.94 ± 3% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
> 1.06 ± 2% -0.1 0.97 ± 3% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
> 1.06 ± 2% -0.1 0.98 ± 3% perf-profile.calltrace.cycles-pp.page_fault
> 0.91 ± 3% -0.1 0.82 ± 3% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
> 0.86 ± 3% -0.1 0.78 ± 3% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
> 0.61 ± 12% +0.2 0.79 ± 12% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
> 0.61 ± 10% +0.2 0.79 ± 10% perf-profile.calltrace.cycles-pp.shm_close.remove_vma.__do_munmap.ksys_shmdt.do_syscall_64
> 0.67 ± 11% +0.2 0.86 ± 10% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
> 0.40 ± 57% +0.2 0.62 ± 8% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.do_shmat.__x64_sys_shmat.do_syscall_64
> 0.40 ± 57% +0.2 0.62 ± 7% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.do_shmat.__x64_sys_shmat
> 0.43 ± 57% +0.2 0.66 ± 7% perf-profile.calltrace.cycles-pp.down_write.do_shmat.__x64_sys_shmat.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.41 ± 57% +0.3 0.67 ± 9% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.shm_close.remove_vma.__do_munmap
> 0.40 ± 57% +0.3 0.66 ± 7% perf-profile.calltrace.cycles-pp.down_write.ipcget.__x64_sys_shmget.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.41 ± 57% +0.3 0.67 ± 9% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.shm_close.remove_vma
> 0.44 ± 57% +0.3 0.71 ± 9% perf-profile.calltrace.cycles-pp.down_write.shm_close.remove_vma.__do_munmap.ksys_shmdt
> 0.26 ±100% +0.4 0.67 ± 8% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
> 0.28 ±100% +0.4 0.71 ± 7% perf-profile.calltrace.cycles-pp.__might_fault._copy_from_user.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.58 ± 10% +0.4 1.03 ± 6% perf-profile.calltrace.cycles-pp.shmctl_down.ksys_shmctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
> 0.59 ± 10% +0.5 1.05 ± 6% perf-profile.calltrace.cycles-pp.ksys_shmctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
> 0.13 ±173% +0.5 0.62 ± 7% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.ipcget.__x64_sys_shmget.do_syscall_64
> 0.13 ±173% +0.5 0.62 ± 7% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.ipcget.__x64_sys_shmget
> 0.12 ±173% +0.5 0.66 ± 8% perf-profile.calltrace.cycles-pp.dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
> 0.00 +0.6 0.58 ± 7% perf-profile.calltrace.cycles-pp.copy_user_generic_unrolled._copy_from_user.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.00 +0.6 0.60 ± 7% perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.ksys_shmdt.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.00 +0.6 0.61 ± 7% perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64
> 0.00 +0.6 0.62 ± 4% perf-profile.calltrace.cycles-pp.semctl_down.ksys_semctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.semctl
> 0.75 ± 8% +0.6 1.38 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_semop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 0.00 +0.6 0.65 ± 9% perf-profile.calltrace.cycles-pp.ipcperms.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 0.00 +0.6 0.65 ± 8% perf-profile.calltrace.cycles-pp.__dentry_kill.dentry_kill.dput.__fput.task_work_run
> 0.89 ± 11% +0.7 1.57 ± 8% perf-profile.calltrace.cycles-pp.do_shmat.__x64_sys_shmat.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmat
> 0.90 ± 11% +0.7 1.58 ± 8% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmat
> 0.90 ± 10% +0.7 1.58 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_shmat.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmat
> 0.00 +0.7 0.68 ± 5% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 0.00 +0.7 0.69 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait
> 0.00 +0.7 0.69 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
> 0.00 +0.7 0.69 ± 6% perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
> 0.00 +0.7 0.69 ± 6% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
> 0.90 ± 11% +0.7 1.59 ± 8% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.shmat
> 0.00 +0.7 0.69 ± 5% perf-profile.calltrace.cycles-pp.wait
> 0.27 ±100% +0.7 0.99 ± 7% perf-profile.calltrace.cycles-pp.perform_atomic_semop.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 0.95 ± 11% +0.7 1.67 ± 8% perf-profile.calltrace.cycles-pp.shmat
> 0.00 +0.7 0.73 ± 6% perf-profile.calltrace.cycles-pp.ipcget.ksys_semget.do_syscall_64.entry_SYSCALL_64_after_hwframe.semget
> 0.00 +0.7 0.74 ± 8% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
> 0.72 ± 9% +0.7 1.46 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 0.00 +0.8 0.75 ± 5% perf-profile.calltrace.cycles-pp.ksys_semget.do_syscall_64.entry_SYSCALL_64_after_hwframe.semget
> 0.00 +0.8 0.76 ± 8% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
> 0.89 ± 9% +0.8 1.65 ± 9% perf-profile.calltrace.cycles-pp.__do_munmap.ksys_shmdt.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
> 0.00 +0.8 0.76 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.semget
> 0.00 +0.8 0.76 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.semget
> 0.90 ± 9% +0.8 1.68 ± 9% perf-profile.calltrace.cycles-pp.ksys_shmdt.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
> 0.00 +0.8 0.78 ± 6% perf-profile.calltrace.cycles-pp.ksys_semctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.semctl
> 0.00 +0.8 0.80 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.semctl
> 0.00 +0.8 0.80 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.semctl
> 0.00 +0.8 0.85 ± 5% perf-profile.calltrace.cycles-pp.semget
> 0.97 ± 9% +0.8 1.82 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmctl
> 0.97 ± 9% +0.8 1.82 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.shmctl
> 0.94 ± 10% +0.9 1.79 ± 6% perf-profile.calltrace.cycles-pp._copy_from_user.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 1.04 ± 9% +0.9 1.91 ± 6% perf-profile.calltrace.cycles-pp.ipcget.__x64_sys_shmget.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmget
> 1.05 ± 9% +0.9 1.92 ± 5% perf-profile.calltrace.cycles-pp.__x64_sys_shmget.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmget
> 1.05 ± 9% +0.9 1.93 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.shmget
> 1.05 ± 9% +0.9 1.93 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmget
> 1.02 ± 9% +0.9 1.92 ± 7% perf-profile.calltrace.cycles-pp.shmctl
> 1.00 ± 9% +0.9 1.93 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.shmdt
> 1.00 ± 8% +0.9 1.94 ± 9% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.shmdt
> 1.13 ± 8% +0.9 2.08 ± 6% perf-profile.calltrace.cycles-pp.shmget
> 0.00 +1.0 0.97 ± 5% perf-profile.calltrace.cycles-pp.semctl
> 1.03 ± 9% +1.0 1.99 ± 5% perf-profile.calltrace.cycles-pp.do_smart_update.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 1.04 ± 8% +1.0 2.03 ± 9% perf-profile.calltrace.cycles-pp.shmdt
> 2.61 ± 9% +1.3 3.89 ± 14% perf-profile.calltrace.cycles-pp.security_ipc_permission.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 1.65 ± 13% +2.0 3.65 ± 7% perf-profile.calltrace.cycles-pp.security_sem_semop.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 1.61 ± 6% +5.3 6.89 ± 2% perf-profile.calltrace.cycles-pp.idr_find.ipc_obtain_object_check.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe
> 6.45 ± 9% +6.2 12.63 ± 6% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.semop
> 2.92 ± 5% +8.0 10.88 ± 7% perf-profile.calltrace.cycles-pp.ipc_obtain_object_check.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 9.07 ± 8% +8.8 17.85 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.semop
> 15.56 ± 8% +18.8 34.38 ± 6% perf-profile.calltrace.cycles-pp.do_semtimedop.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 17.40 ± 8% +20.4 37.82 ± 6% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.semop
> 17.99 ± 8% +21.0 39.00 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.semop
> 33.85 ± 8% +36.1 69.98 ± 6% perf-profile.calltrace.cycles-pp.semop
>
>
>
> reaim.time.percent_of_cpu_this_job_got
>
> 1800 +-+------------------------------------------------------------------+
> | O |
> 1780 +-OO O O O |
> 1760 O-+ O O OO |
> | |
> 1740 +-+ |
> | |
> 1720 +-+ O .+. |
> | O O + + |
> 1700 +-+ O .+ : |
> 1680 +-+ .+ .+. .+ .+.+ .+ : +. .+.++.|
> |.++.+.++ : .++ ++.+ + +.+.++.++ +.+.++.+.+ ++ |
> 1660 +-+ : + |
> | + |
> 1640 +-+------------------------------------------------------------------+
>
>
> [*] bisect-good sample
> [O] bisect-bad sample
>
>
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
>
> Thanks,
> Rong Chen



--
Kees Cook