[LKP] [kernel] fc7f0dd3817: -2.1% will-it-scale.per_thread_ops
From: Huang Ying
Date: Wed Jan 21 2015 - 21:40:02 EST
FYI, we noticed the below changes on
commit fc7f0dd381720ea5ee5818645f7d0e9dece41cb0 ("kernel: avoid overflow in cmp_range")
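For readers who have not looked at the change: the commit title points at the classic pitfall of a subtraction-based sort comparator, where the 64-bit difference is narrowed to int and can overflow or wrap. Below is a hedged, standalone sketch of that pattern (struct layout and values are illustrative only, not the actual kernel/range.c code):

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for the kernel's struct range (fields assumed). */
struct range {
	uint64_t start;
	uint64_t end;
};

/*
 * Overflow-prone comparator: the 64-bit difference is implicitly
 * narrowed to int, so starts that differ by a multiple of 2^32
 * wrongly compare as equal (and large differences can flip sign).
 */
static int cmp_range_subtract(const void *x1, const void *x2)
{
	const struct range *r1 = x1;
	const struct range *r2 = x2;
	int64_t start1 = r1->start;
	int64_t start2 = r2->start;

	return start1 - start2;
}

/*
 * Overflow-safe comparator in the style the commit title describes:
 * compare explicitly and only ever return -1, 0, or 1.
 */
static int cmp_range_safe(const void *x1, const void *x2)
{
	const struct range *r1 = x1;
	const struct range *r2 = x2;

	if (r1->start < r2->start)
		return -1;
	if (r1->start > r2->start)
		return 1;
	return 0;
}

int main(void)
{
	struct range a = { .start = 0,          .end = 1 };
	struct range b = { .start = 1ULL << 33, .end = (1ULL << 33) + 1 };

	printf("subtract-based: %d\n", cmp_range_subtract(&a, &b)); /* 0: wrongly "equal" */
	printf("explicit:       %d\n", cmp_range_safe(&a, &b));     /* -1: correct */
	return 0;
}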
testbox/testcase/testparams: lituya/will-it-scale/powersave-mmap2
7ad4b4ae5757b896 fc7f0dd381720ea5ee5818645f
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
    252693 ±  0%      -2.2%     247031 ±  0%  will-it-scale.per_thread_ops
      0.18 ±  0%      +1.8%       0.19 ±  0%  will-it-scale.scalability
     43536 ± 24%    +276.2%     163774 ± 33%  sched_debug.cpu#6.ttwu_local
      3.55 ±  2%     +36.2%       4.84 ±  2%  perf-profile.cpu-cycles.___might_sleep.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region
      8.49 ± 12%     -29.5%       5.99 ±  5%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.__percpu_counter_add.do_munmap.vm_munmap.sys_munmap
     12.27 ±  8%     -20.2%       9.80 ±  3%  perf-profile.cpu-cycles.__percpu_counter_add.do_munmap.vm_munmap.sys_munmap.system_call_fastpath
      7.45 ±  7%     -20.8%       5.90 ±  5%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.__percpu_counter_add.__vm_enough_memory.selinux_vm_enough_memory.security_vm_enough_memory_mm
     11.11 ±  3%     -12.9%       9.67 ±  3%  perf-profile.cpu-cycles.__percpu_counter_add.__vm_enough_memory.selinux_vm_enough_memory.security_vm_enough_memory_mm.mmap_region
      2.46 ±  3%     +13.1%       2.78 ±  2%  perf-profile.cpu-cycles.___might_sleep.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
     11.42 ±  3%     -12.3%      10.01 ±  2%  perf-profile.cpu-cycles.__vm_enough_memory.selinux_vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap_pgoff
     12.39 ±  3%     -11.2%      11.00 ±  2%  perf-profile.cpu-cycles.selinux_vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap_pgoff.vm_mmap_pgoff
     12.45 ±  3%     -11.1%      11.07 ±  2%  perf-profile.cpu-cycles.security_vm_enough_memory_mm.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.sys_mmap_pgoff
     14.38 ±  1%      +9.5%      15.75 ±  1%  perf-profile.cpu-cycles.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
testbox/testcase/testparams: lituya/will-it-scale/performance-mmap2
7ad4b4ae5757b896 fc7f0dd381720ea5ee5818645f
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
    268761 ±  0%      -2.1%     263177 ±  0%  will-it-scale.per_thread_ops
      0.18 ±  0%      +1.8%       0.19 ±  0%  will-it-scale.scalability
      0.01 ± 37%     -99.3%       0.00 ± 12%  sched_debug.rt_rq[10]:/.rt_time
    104123 ± 41%     -63.7%      37788 ± 45%  sched_debug.cpu#5.ttwu_local
    459901 ± 48%     +60.7%     739071 ± 31%  sched_debug.cpu#6.ttwu_count
   1858053 ± 12%     -36.9%    1171826 ± 38%  sched_debug.cpu#10.sched_goidle
   3716823 ± 12%     -36.9%    2344353 ± 38%  sched_debug.cpu#10.nr_switches
   3777468 ± 11%     -36.9%    2383575 ± 36%  sched_debug.cpu#10.sched_count
        36 ± 28%     -40.9%         21 ±  7%  sched_debug.cpu#6.cpu_load[1]
     18042 ± 17%     +54.0%      27789 ± 30%  sched_debug.cfs_rq[4]:/.exec_clock
        56 ± 17%     -48.8%         29 ±  5%  sched_debug.cfs_rq[6]:/.runnable_load_avg
        36 ± 29%     +43.6%         52 ± 11%  sched_debug.cpu#4.load
    594415 ±  4%     +82.4%    1084432 ± 18%  sched_debug.cpu#2.ttwu_count
        15 ±  0%     +51.1%         22 ± 14%  sched_debug.cpu#4.cpu_load[4]
      2077 ± 11%     -36.7%       1315 ± 15%  sched_debug.cpu#6.curr->pid
        11 ± 28%     +48.6%         17 ± 23%  sched_debug.cpu#7.cpu_load[4]
      0.00 ± 20%     +77.0%       0.00 ± 26%  sched_debug.rt_rq[5]:/.rt_time
        16 ±  5%     +52.1%         24 ±  9%  sched_debug.cpu#4.cpu_load[3]
        17 ± 11%     +50.0%         26 ±  8%  sched_debug.cpu#4.cpu_load[2]
     48035 ±  7%     -22.2%      37362 ± 24%  sched_debug.cfs_rq[12]:/.exec_clock
        34 ± 12%     -24.5%         25 ± 20%  sched_debug.cfs_rq[12]:/.runnable_load_avg
        33 ± 11%     -24.2%         25 ± 20%  sched_debug.cpu#12.cpu_load[4]
        19 ± 25%     +50.9%         28 ±  3%  sched_debug.cpu#4.cpu_load[1]
        66 ± 17%     -24.7%         49 ±  5%  sched_debug.cpu#6.load
    421462 ± 16%     +18.8%     500676 ± 13%  sched_debug.cfs_rq[1]:/.min_vruntime
      3.60 ±  0%     +35.4%       4.87 ±  0%  perf-profile.cpu-cycles.___might_sleep.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region
        44 ±  9%     +37.9%         60 ± 17%  sched_debug.cpu#3.load
        37 ±  6%     -17.9%         30 ± 15%  sched_debug.cpu#15.cpu_load[3]
      6.96 ±  4%     -10.4%       6.24 ±  3%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.__percpu_counter_add.do_munmap.vm_munmap.sys_munmap
        36 ±  6%     +24.1%         44 ±  2%  sched_debug.cpu#2.load
        39 ±  7%     -16.9%         32 ± 12%  sched_debug.cpu#15.cpu_load[2]
   1528695 ±  6%     -19.5%    1230190 ± 16%  sched_debug.cpu#10.ttwu_count
        36 ±  6%     +27.3%         46 ±  9%  sched_debug.cpu#10.load
       447 ±  3%     -13.9%        385 ± 10%  sched_debug.cfs_rq[15]:/.tg_runnable_contrib
     20528 ±  3%     -13.8%      17701 ± 10%  sched_debug.cfs_rq[15]:/.avg->runnable_avg_sum
    634808 ±  6%     +50.3%     954347 ± 24%  sched_debug.cpu#2.sched_goidle
   1270648 ±  6%     +50.3%    1909528 ± 24%  sched_debug.cpu#2.nr_switches
   1284042 ±  6%     +51.4%    1944604 ± 23%  sched_debug.cpu#2.sched_count
        55 ± 11%     +28.7%         71 ±  4%  sched_debug.cpu#8.cpu_load[0]
      6.39 ±  0%      -8.7%       5.84 ±  2%  perf-profile.cpu-cycles._raw_spin_lock_irqsave.__percpu_counter_add.__vm_enough_memory.selinux_vm_enough_memory.security_vm_enough_memory_mm
     48721 ± 11%     +19.1%      58037 ±  5%  sched_debug.cpu#11.nr_load_updates
        53 ±  9%     +16.1%         62 ±  1%  sched_debug.cpu#8.cpu_load[1]
      1909 ±  0%     +22.2%       2333 ±  9%  sched_debug.cpu#3.curr->pid
      0.95 ±  4%      -8.4%       0.87 ±  4%  perf-profile.cpu-cycles.file_map_prot_check.selinux_mmap_file.security_mmap_file.vm_mmap_pgoff.sys_mmap_pgoff
    567608 ±  8%     +11.0%     629780 ±  4%  sched_debug.cfs_rq[14]:/.min_vruntime
    804637 ± 15%     +24.4%    1000664 ± 13%  sched_debug.cpu#3.ttwu_count
    684460 ±  5%      -9.6%     618867 ±  3%  sched_debug.cpu#14.avg_idle
      1.02 ±  4%      -7.2%       0.94 ±  4%  perf-profile.cpu-cycles.selinux_mmap_file.security_mmap_file.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
      2605 ±  2%      -5.8%       2454 ±  5%  slabinfo.kmalloc-96.active_objs
      2605 ±  2%      -5.8%       2454 ±  5%  slabinfo.kmalloc-96.num_objs
        50 ±  4%     +11.3%         56 ±  1%  sched_debug.cfs_rq[8]:/.runnable_load_avg
      1.15 ±  4%      -6.4%       1.08 ±  4%  perf-profile.cpu-cycles.security_mmap_file.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.system_call_fastpath
      1.07 ±  2%      +9.7%       1.17 ±  3%  perf-profile.cpu-cycles.vma_compute_subtree_gap.__vma_link_rb.vma_link.mmap_region.do_mmap_pgoff
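For context on what the profiles above are exercising: the will-it-scale mmap2 case is, to my understanding, a tight per-task mmap/munmap loop, which matches the mmap_region()/__vm_enough_memory() and unmap_region()/unmap_page_range() entries that dominate the cycles. A minimal single-threaded sketch (region size and flags assumed, not the benchmark's actual source) looks roughly like:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define MEMSIZE (128UL * 1024 * 1024)	/* region size assumed for illustration */
#define LOOPS   100000UL

int main(void)
{
	unsigned long long iterations = 0;

	/*
	 * Each iteration maps and unmaps an anonymous region, walking
	 * mmap_region()/__vm_enough_memory() on the way in and
	 * unmap_region()/unmap_page_range() on the way out, i.e. the
	 * same paths seen in the profile entries above.
	 */
	for (unsigned long i = 0; i < LOOPS; i++) {
		char *c = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (c == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		if (munmap(c, MEMSIZE)) {
			perror("munmap");
			return 1;
		}
		iterations++;
	}
	printf("%llu iterations\n", iterations);
	return 0;
}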
To reproduce:

        apt-get install ruby ruby-oj
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml  # the job file attached in this email
        bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang, Ying
---
testcase: will-it-scale
default-monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 10
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor:
- powersave
commit: ec6f34e5b552fb0a52e6aae1a5afbbb1605cc6cc
model: Grantley Haswell
nr_cpu: 16
memory: 16G
hdd_partitions:
swap_partitions:
rootfs_partition:
perf-profile:
  freq: 800
will-it-scale:
  test:
  - mmap2
testbox: lituya
tbox_group: lituya
kconfig: x86_64-rhel
enqueue_time: 2015-01-18 14:10:07.541442957 +08:00
head_commit: b213d55915f2ee6748ba62f743b5e70564ab31e7
base_commit: ec6f34e5b552fb0a52e6aae1a5afbbb1605cc6cc
branch: linux-devel/devel-hourly-2015011917
kernel: "/kernel/x86_64-rhel/ec6f34e5b552fb0a52e6aae1a5afbbb1605cc6cc/vmlinuz-3.19.0-rc5-gec6f34e"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-01-13.cgz
result_root: "/result/lituya/will-it-scale/powersave-mmap2/debian-x86_64-2015-01-13.cgz/x86_64-rhel/ec6f34e5b552fb0a52e6aae1a5afbbb1605cc6cc/0"
job_file: "/lkp/scheduled/lituya/cyclic_will-it-scale-powersave-mmap2-x86_64-rhel-BASE-ec6f34e5b552fb0a52e6aae1a5afbbb1605cc6cc-0.yaml"
dequeue_time: 2015-01-19 18:08:35.232473498 +08:00
job_state: finished
loadavg: 11.32 6.60 2.70 1/178 7099
start_time: '1421662149'
end_time: '1421662453'
version: "/lkp/lkp/.src-20150119-113749"
./runtest.py mmap2 32 1 8 12 16
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx