[LKP] [vmstat] ba4877b9ca5: not primary result change, -62.5% will-it-scale.time.involuntary_context_switches

From: Huang Ying
Date: Wed Feb 25 2015 - 22:11:11 EST


FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit ba4877b9ca51f80b5d30f304a46762f0509e1635 ("vmstat: do not use deferrable delayed work for vmstat_update")

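For context, the change named in the commit title is small: the per-CPU vmstat_update work item stops being a deferrable delayed work and becomes a regular one, so the periodic vmstat refresh now wakes an otherwise idle CPU on schedule instead of waiting for that CPU to wake up for some other reason. That is consistent with the cpuidle and scheduler shifts in the tables below. A minimal sketch of such a change, in the style of mm/vmstat.c of that era (illustrative only, not the verbatim hunk; the function name is assumed for illustration):

/*
 * Sketch: switch the per-CPU vmstat work from a deferrable to a regular
 * delayed work.  INIT_DEFERRABLE_WORK() arms a timer that does not wake
 * an idle CPU; INIT_DELAYED_WORK() arms a normal timer that does.
 */
static void start_cpu_timer(int cpu)
{
        struct delayed_work *work = &per_cpu(vmstat_work, cpu);

        /* was: INIT_DEFERRABLE_WORK(work, vmstat_update); */
        INIT_DELAYED_WORK(work, vmstat_update);
        schedule_delayed_work_on(cpu, work, __round_jiffies_relative(HZ, cpu));
}
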
testbox/testcase/testparams: wsm/will-it-scale/performance-malloc1

9c0415eb8cbf0c8f ba4877b9ca51f80b5d30f304a4
---------------- --------------------------
%stddev %change %stddev
\ | \
1194 ± 0% -62.5% 447 ± 7% will-it-scale.time.involuntary_context_switches
246 ± 0% +2.3% 252 ± 1% will-it-scale.time.system_time
18001.54 ± 22% -100.0% 0.00 ± 0% sched_debug.cfs_rq[3]:/.MIN_vruntime
18001.54 ± 22% -100.0% 0.00 ± 0% sched_debug.cfs_rq[3]:/.max_vruntime
1097152 ± 3% -82.4% 192865 ± 1% cpuidle.C6-NHM.usage
99560 ± 16% +57.7% 157029 ± 23% sched_debug.cfs_rq[8]:/.spread0
27671 ± 23% -65.9% 9439 ± 8% sched_debug.cfs_rq[5]:/.exec_clock
1194 ± 0% -62.5% 447 ± 7% time.involuntary_context_switches
247334 ± 20% -61.2% 96086 ± 3% sched_debug.cfs_rq[5]:/.min_vruntime
20417 ± 35% -48.7% 10473 ± 8% sched_debug.cfs_rq[3]:/.exec_clock
104076 ± 38% +73.9% 181000 ± 30% sched_debug.cpu#2.ttwu_local
180071 ± 29% -41.3% 105641 ± 10% sched_debug.cfs_rq[3]:/.min_vruntime
34 ± 14% -48.6% 17 ± 10% sched_debug.cpu#5.cpu_load[4]
43629 ± 18% -32.7% 29370 ± 13% sched_debug.cpu#3.nr_load_updates
42653 ± 14% -42.6% 24488 ± 14% sched_debug.cpu#5.nr_load_updates
13660 ± 9% -41.4% 8010 ± 3% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
296 ± 9% -41.2% 174 ± 3% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
205846 ± 6% -11.2% 182783 ± 6% sched_debug.cpu#7.sched_count
37 ± 10% -38.4% 23 ± 8% sched_debug.cpu#5.cpu_load[3]
1378 ± 12% -20.6% 1094 ± 4% sched_debug.cpu#11.ttwu_local
205691 ± 6% -11.2% 182623 ± 6% sched_debug.cpu#7.nr_switches
102423 ± 6% -11.2% 90915 ± 6% sched_debug.cpu#7.sched_goidle
25 ± 21% +41.6% 35 ± 17% sched_debug.cpu#3.cpu_load[0]
68 ± 16% -29.3% 48 ± 9% sched_debug.cpu#8.cpu_load[0]
32 ± 14% +54.2% 50 ± 6% sched_debug.cpu#11.cpu_load[4]
507 ± 10% -30.0% 355 ± 3% sched_debug.cfs_rq[10]:/.blocked_load_avg
39084 ± 16% +48.0% 57862 ± 2% sched_debug.cfs_rq[11]:/.exec_clock
10022712 ± 9% -28.8% 7139491 ± 13% cpuidle.C1-NHM.time
341246 ± 14% +47.3% 502560 ± 6% sched_debug.cfs_rq[11]:/.min_vruntime
562 ± 9% -28.8% 400 ± 4% sched_debug.cfs_rq[10]:/.tg_load_contrib
66 ± 7% -20.8% 52 ± 14% sched_debug.cfs_rq[8]:/.runnable_load_avg
36 ± 18% +45.8% 52 ± 6% sched_debug.cpu#11.cpu_load[3]
43079 ± 1% +8.0% 46513 ± 2% softirqs.RCU
43 ± 9% -25.6% 32 ± 10% sched_debug.cpu#5.cpu_load[2]
1745173 ± 4% +43.2% 2499517 ± 3% cpuidle.C3-NHM.usage
44 ± 18% +25.3% 55 ± 10% sched_debug.cpu#9.cpu_load[2]
64453 ± 8% +27.0% 81824 ± 3% sched_debug.cpu#11.nr_load_updates
58719 ± 7% -14.3% 50299 ± 9% sched_debug.cpu#0.ttwu_count
40 ± 16% +24.7% 50 ± 3% sched_debug.cpu#9.cpu_load[4]
42 ± 16% +26.2% 53 ± 5% sched_debug.cpu#9.cpu_load[3]
61887 ± 4% -16.2% 51890 ± 11% sched_debug.cpu#0.sched_goidle
125652 ± 4% -16.1% 105434 ± 10% sched_debug.cpu#0.nr_switches
125769 ± 4% -16.1% 105564 ± 10% sched_debug.cpu#0.sched_count
16164 ± 7% +35.2% 21852 ± 1% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
352 ± 7% +34.9% 475 ± 1% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
1442 ± 11% +20.9% 1742 ± 3% sched_debug.cpu#11.curr->pid
7.243e+08 ± 1% +20.0% 8.69e+08 ± 3% cpuidle.C3-NHM.time
172138 ± 5% +11.9% 192649 ± 6% sched_debug.cpu#9.sched_count
85576 ± 5% +12.0% 95879 ± 6% sched_debug.cpu#9.sched_goidle
91826 ± 0% +13.0% 103784 ± 11% sched_debug.cfs_rq[6]:/.exec_clock
46977 ± 15% +21.8% 57227 ± 2% sched_debug.cfs_rq[9]:/.exec_clock
115370 ± 1% +11.5% 128602 ± 8% sched_debug.cpu#6.nr_load_updates
67629 ± 10% +19.7% 80928 ± 0% sched_debug.cpu#9.nr_load_updates
0.92 ± 4% +9.2% 1.00 ± 3% perf-profile.cpu-cycles.__vma_link_rb.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff
0.89 ± 3% +9.5% 0.98 ± 5% perf-profile.cpu-cycles._cond_resched.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
17.84 ± 3% -7.2% 16.56 ± 1% turbostat.CPU%c6
10197 ± 0% +2.5% 10455 ± 1% vmstat.system.in

testbox/testcase/testparams: lkp-sb03/will-it-scale/malloc1

9c0415eb8cbf0c8f ba4877b9ca51f80b5d30f304a4
---------------- --------------------------
2585 ± 2% -69.2% 797 ± 8% will-it-scale.time.involuntary_context_switches
78369 ± 36% +156.1% 200708 ± 19% cpuidle.C3-SNB.usage
95820 ± 11% +60.9% 154175 ± 19% sched_debug.cfs_rq[28]:/.spread0
95549 ± 10% +61.3% 154133 ± 20% sched_debug.cfs_rq[26]:/.spread0
95600 ± 10% +60.3% 153220 ± 19% sched_debug.cfs_rq[29]:/.spread0
97285 ± 8% +57.9% 153634 ± 19% sched_debug.cfs_rq[31]:/.spread0
254274 ± 29% +39.0% 353345 ± 7% sched_debug.cfs_rq[20]:/.spread0
297854 ± 3% +18.5% 353038 ± 8% sched_debug.cfs_rq[22]:/.spread0
298185 ± 2% +18.1% 352124 ± 8% sched_debug.cfs_rq[17]:/.spread0
296875 ± 3% +19.4% 354400 ± 7% sched_debug.cfs_rq[18]:/.spread0
297800 ± 3% +18.5% 352927 ± 7% sched_debug.cfs_rq[21]:/.spread0
0.00 ± 8% +142.4% 0.00 ± 33% sched_debug.rt_rq[8]:/.rt_time
2585 ± 2% -69.2% 797 ± 8% time.involuntary_context_switches
29637066 ± 30% +101.3% 59653820 ± 24% cpuidle.C3-SNB.time
40 ± 43% +105.5% 83 ± 14% sched_debug.cpu#0.cpu_load[4]
11 ± 26% +91.5% 22 ± 4% sched_debug.cfs_rq[7]:/.runnable_load_avg
39 ± 40% +104.5% 79 ± 13% sched_debug.cpu#0.cpu_load[3]
531 ± 10% +75.1% 930 ± 44% sched_debug.cpu#26.ttwu_local
36 ± 34% +91.1% 69 ± 12% sched_debug.cpu#0.cpu_load[2]
95262 ± 11% +60.9% 153293 ± 18% sched_debug.cfs_rq[27]:/.spread0
120 ± 19% -53.7% 55 ± 42% sched_debug.cfs_rq[17]:/.tg_load_contrib
278957 ± 26% +57.1% 438311 ± 17% cpuidle.C1E-SNB.usage
29 ± 30% +62.7% 48 ± 18% sched_debug.cfs_rq[0]:/.load
33 ± 27% +66.7% 56 ± 10% sched_debug.cpu#0.cpu_load[1]
68 ± 23% -32.2% 46 ± 18% sched_debug.cpu#16.load
295 ± 9% +46.9% 434 ± 28% sched_debug.cpu#17.ttwu_local
16 ± 41% +95.3% 31 ± 36% sched_debug.cpu#7.load
42 ± 20% -32.2% 29 ± 16% sched_debug.cpu#21.cpu_load[0]
50555 ± 17% -30.4% 35165 ± 3% sched_debug.cpu#26.sched_count
19 ± 25% -24.7% 14 ± 14% sched_debug.cpu#29.cpu_load[1]
24874 ± 18% -30.9% 17181 ± 5% sched_debug.cpu#26.sched_goidle
50298 ± 17% -30.3% 35047 ± 3% sched_debug.cpu#26.nr_switches
34788152 ± 26% +49.5% 52019925 ± 15% cpuidle.C1E-SNB.time
8 ± 37% +87.5% 15 ± 12% sched_debug.cpu#8.cpu_load[2]
93498 ± 4% +11.4% 104199 ± 7% softirqs.RCU
28 ± 24% +44.2% 40 ± 12% sched_debug.cfs_rq[0]:/.runnable_load_avg
3508 ± 5% +21.1% 4247 ± 11% numa-vmstat.node1.nr_anon_pages
14073 ± 6% +20.8% 16993 ± 11% numa-meminfo.node1.AnonPages
5 ± 15% +45.5% 8 ± 8% sched_debug.cpu#8.cpu_load[4]
1651 ± 16% +54.6% 2554 ± 29% sched_debug.cpu#1.ttwu_local
35 ± 28% +36.9% 48 ± 17% sched_debug.cpu#0.cpu_load[0]
173 ± 12% -17.7% 142 ± 4% sched_debug.cfs_rq[14]:/.tg_runnable_contrib
25918 ± 19% -26.8% 18974 ± 2% sched_debug.cpu#26.ttwu_count
8010 ± 12% -17.8% 6582 ± 4% sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
6 ± 25% +65.4% 10 ± 12% sched_debug.cpu#8.cpu_load[3]
15670 ± 10% +14.3% 17912 ± 9% numa-vmstat.node1.numa_other
297389 ± 3% +22.0% 362854 ± 11% sched_debug.cfs_rq[23]:/.spread0
297771 ± 3% +18.8% 353825 ± 8% sched_debug.cfs_rq[19]:/.spread0
6713 ± 3% +10.3% 7405 ± 4% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
145 ± 3% +10.1% 160 ± 4% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
2566 ± 7% -9.6% 2319 ± 5% sched_debug.cpu#21.curr->pid
4694 ± 10% +14.4% 5368 ± 6% sched_debug.cpu#0.ttwu_local
37 ± 8% -19.9% 30 ± 14% sched_debug.cpu#21.cpu_load[1]
33072 ± 10% -19.5% 26612 ± 9% sched_debug.cpu#11.nr_switches
16783 ± 8% -20.1% 13407 ± 14% numa-meminfo.node0.AnonPages
4198 ± 7% -19.9% 3365 ± 14% numa-vmstat.node0.nr_anon_pages
3458 ± 7% -9.8% 3120 ± 1% sched_debug.cfs_rq[30]:/.tg_load_avg
3451 ± 7% -9.4% 3126 ± 2% sched_debug.cfs_rq[31]:/.tg_load_avg
23550 ± 1% -25.1% 17646 ± 19% sched_debug.cpu#28.sched_goidle
3468 ± 7% -9.1% 3154 ± 1% sched_debug.cfs_rq[29]:/.tg_load_avg
1493 ± 11% +22.2% 1823 ± 8% sched_debug.cpu#2.curr->pid
38654 ± 6% -10.1% 34735 ± 4% sched_debug.cpu#14.nr_load_updates
16449 ± 8% -15.7% 13867 ± 8% sched_debug.cpu#11.ttwu_count
47593 ± 1% -23.4% 36466 ± 21% sched_debug.cpu#28.nr_switches
6164 ± 1% +8.5% 6687 ± 4% sched_debug.cfs_rq[12]:/.exec_clock

testbox/testcase/testparams: lkp-sbx04/will-it-scale/performance-malloc1

9c0415eb8cbf0c8f ba4877b9ca51f80b5d30f304a4
---------------- --------------------------
4389 ± 2% -66.0% 1494 ± 0% will-it-scale.time.involuntary_context_switches
37594 ± 32% +542.8% 241666 ± 9% cpuidle.C3-SNB.usage
12 ± 38% -60.4% 4 ± 27% sched_debug.cpu#56.load
73932 ± 14% -48.3% 38186 ± 43% sched_debug.cpu#7.ttwu_count
2 ± 0% +175.0% 5 ± 47% sched_debug.cpu#11.cpu_load[2]
23 ± 43% +206.5% 70 ± 39% sched_debug.cfs_rq[55]:/.blocked_load_avg
4389 ± 2% -66.0% 1494 ± 0% time.involuntary_context_switches
73 ± 44% -53.7% 34 ± 29% sched_debug.cfs_rq[33]:/.tg_load_contrib
14 ± 29% +125.9% 32 ± 37% sched_debug.cpu#45.load
1.324e+08 ± 29% -63.7% 48101669 ± 16% cpuidle.C1-SNB.time
34290260 ± 6% +165.5% 91052161 ± 14% cpuidle.C3-SNB.time
12 ± 25% +78.0% 22 ± 14% sched_debug.cpu#0.cpu_load[4]
2 ± 19% -55.6% 1 ± 0% sched_debug.cfs_rq[54]:/.nr_spread_over
12 ± 0% +145.8% 29 ± 46% sched_debug.cfs_rq[45]:/.load
5215 ± 18% -55.2% 2334 ± 22% numa-vmstat.node2.nr_active_anon
20854 ± 18% -55.3% 9329 ± 22% numa-meminfo.node2.Active(anon)
316 ± 17% +68.0% 531 ± 25% sched_debug.cpu#62.ttwu_local
176 ± 10% +54.4% 272 ± 21% sched_debug.cpu#39.ttwu_local
157060 ± 19% -48.4% 81039 ± 39% sched_debug.cpu#7.sched_count
171170 ± 34% +62.8% 278733 ± 11% cpuidle.C1E-SNB.usage
0.00 ± 10% +41.8% 0.00 ± 19% sched_debug.rt_rq[36]:/.rt_time
243909 ± 31% +72.6% 421059 ± 5% sched_debug.cfs_rq[51]:/.spread0
12 ± 25% +27.1% 15 ± 21% sched_debug.cpu#0.cpu_load[1]
143112 ± 14% -46.3% 76834 ± 44% sched_debug.cpu#7.nr_switches
71413 ± 14% -46.3% 38314 ± 44% sched_debug.cpu#7.sched_goidle
13 ± 12% +41.5% 18 ± 23% sched_debug.cpu#46.cpu_load[0]
1024 ± 27% -27.2% 745 ± 26% sched_debug.cpu#15.ttwu_local
1061 ± 9% -34.8% 692 ± 2% sched_debug.cpu#30.curr->pid
744 ± 8% +43.5% 1068 ± 18% sched_debug.cpu#20.curr->pid
0.00 ± 24% +76.1% 0.00 ± 14% sched_debug.rt_rq[16]:/.rt_time
308 ± 11% +79.2% 552 ± 35% sched_debug.cpu#57.ttwu_local
28950 ± 29% -37.0% 18242 ± 16% sched_debug.cpu#23.sched_count
14117 ± 17% +55.5% 21946 ± 17% sched_debug.cpu#13.sched_goidle
13969 ± 16% +59.1% 22223 ± 18% sched_debug.cpu#13.ttwu_count
28524 ± 16% +54.6% 44106 ± 17% sched_debug.cpu#13.nr_switches
3587 ± 12% -22.7% 2774 ± 18% numa-vmstat.node2.nr_slab_reclaimable
14352 ± 12% -22.7% 11099 ± 18% numa-meminfo.node2.SReclaimable
29903 ± 7% +29.5% 38737 ± 14% numa-meminfo.node1.Active
91841976 ± 13% -27.9% 66180100 ± 13% cpuidle.C1E-SNB.time
76 ± 11% +34.1% 102 ± 24% sched_debug.cfs_rq[40]:/.tg_load_contrib
745 ± 14% +15.8% 863 ± 18% sched_debug.cpu#31.curr->pid
42244 ± 9% -27.8% 30503 ± 8% numa-meminfo.node2.Active
28600 ± 2% +25.5% 35889 ± 12% numa-meminfo.node0.Active
284 ± 17% +30.5% 371 ± 1% sched_debug.cpu#44.ttwu_local
655478 ± 13% -20.0% 524404 ± 3% sched_debug.cfs_rq[0]:/.min_vruntime
42280 ± 2% -23.1% 32510 ± 14% sched_debug.cpu#45.ttwu_count
290 ± 7% +25.9% 365 ± 10% sched_debug.cpu#50.ttwu_local
83350 ± 2% -23.2% 64039 ± 15% sched_debug.cpu#45.nr_switches
41131 ± 3% -22.4% 31900 ± 15% sched_debug.cpu#45.sched_goidle
83731 ± 2% -23.1% 64394 ± 15% sched_debug.cpu#45.sched_count
317 ± 17% +25.5% 398 ± 11% sched_debug.cpu#52.ttwu_local
264 ± 6% +53.6% 406 ± 36% sched_debug.cpu#46.ttwu_local
41799 ± 7% -13.2% 36279 ± 13% sched_debug.cpu#51.nr_switches
42064 ± 7% -13.1% 36535 ± 13% sched_debug.cpu#51.sched_count
12557 ± 27% +56.5% 19654 ± 27% sched_debug.cpu#57.sched_count
10442 ± 6% -12.3% 9152 ± 8% sched_debug.cfs_rq[7]:/.exec_clock
56292 ± 7% -15.4% 47608 ± 13% sched_debug.cpu#7.nr_load_updates
1174 ± 18% +45.2% 1704 ± 11% sched_debug.cpu#11.curr->pid
286 ± 13% +32.5% 379 ± 9% sched_debug.cpu#55.ttwu_local
288745 ± 30% +45.7% 420730 ± 5% sched_debug.cfs_rq[53]:/.spread0
287389 ± 30% +46.5% 420927 ± 5% sched_debug.cfs_rq[52]:/.spread0
2584 ± 3% +11.2% 2872 ± 6% sched_debug.cpu#45.curr->pid
289910 ± 31% +45.7% 422398 ± 5% sched_debug.cfs_rq[54]:/.spread0
293040 ± 31% +42.7% 418044 ± 4% sched_debug.cfs_rq[49]:/.spread0
35054 ± 5% -9.1% 31878 ± 7% sched_debug.cpu#30.nr_load_updates
37803 ± 10% +12.3% 42455 ± 5% sched_debug.cpu#43.sched_goidle
99686 ± 4% -6.0% 93667 ± 5% sched_debug.cpu#38.nr_load_updates
39264 ± 6% +12.8% 44305 ± 4% sched_debug.cpu#43.ttwu_count
3884 ± 16% -14.7% 3311 ± 2% sched_debug.cfs_rq[30]:/.avg->runnable_avg_sum

testbox/testcase/testparams: xps2/pigz/performance-100%-512K

9c0415eb8cbf0c8f ba4877b9ca51f80b5d30f304a4
---------------- --------------------------
26318 ± 1% -4.7% 25068 ± 3% pigz.time.maximum_resident_set_size
1 ± 0% -100.0% 0 ± 0% sched_debug.cfs_rq[0]:/.nr_running
1706 ± 7% -59.5% 691 ± 15% sched_debug.cpu#6.sched_goidle
1.13 ± 38% -51.1% 0.55 ± 40% perf-profile.cpu-cycles.copy_process.part.26.do_fork.sys_clone.stub_clone
1.18 ± 32% -48.9% 0.60 ± 39% perf-profile.cpu-cycles.sys_clone.stub_clone
11 ± 4% -56.5% 5 ± 42% sched_debug.cfs_rq[3]:/.nr_spread_over
1.18 ± 32% -48.9% 0.60 ± 39% perf-profile.cpu-cycles.stub_clone
1.18 ± 32% -48.9% 0.60 ± 39% perf-profile.cpu-cycles.do_fork.sys_clone.stub_clone
1.63 ± 27% -50.3% 0.81 ± 24% perf-profile.cpu-cycles.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 ± 19% -52.3% 0.00 ± 49% sched_debug.rt_rq[1]:/.rt_time
1.88 ± 15% -32.3% 1.27 ± 17% perf-profile.cpu-cycles.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
5059 ± 16% -45.2% 2773 ± 39% sched_debug.cpu#3.sched_goidle
138 ± 2% -8.2% 126 ± 4% sched_debug.cpu#2.cpu_load[1]
126 ± 6% -12.5% 110 ± 2% sched_debug.cpu#7.load
14 ± 7% -41.1% 8 ± 34% sched_debug.cfs_rq[4]:/.nr_spread_over
121 ± 2% +15.0% 139 ± 3% sched_debug.cfs_rq[1]:/.load
122 ± 3% +14.5% 139 ± 3% sched_debug.cpu#1.load
320 ± 42% +113.6% 683 ± 10% sched_debug.cfs_rq[1]:/.tg_load_contrib
351 ± 1% +23.3% 433 ± 4% cpuidle.C3-NHM.usage
1.39 ± 3% -19.6% 1.12 ± 3% perf-profile.cpu-cycles.ret_from_fork
1.62 ± 3% -28.1% 1.17 ± 25% perf-profile.cpu-cycles.__do_page_fault.do_page_fault.page_fault
1.62 ± 3% -26.5% 1.19 ± 27% perf-profile.cpu-cycles.do_page_fault.page_fault
1.77 ± 6% -20.3% 1.41 ± 17% perf-profile.cpu-cycles.page_fault
1.52 ± 2% -31.6% 1.04 ± 24% perf-profile.cpu-cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.34 ± 0% -18.5% 1.09 ± 7% perf-profile.cpu-cycles.kthread.ret_from_fork
126 ± 6% -12.5% 110 ± 2% sched_debug.cfs_rq[7]:/.load
15.23 ± 2% -13.7% 13.15 ± 3% perf-profile.cpu-cycles.copy_page_to_iter.pipe_read.new_sync_read.__vfs_read.vfs_read
126 ± 3% +19.2% 150 ± 2% sched_debug.cfs_rq[3]:/.load
126 ± 3% +19.2% 150 ± 2% sched_debug.cpu#3.load
14.38 ± 2% -12.4% 12.60 ± 5% perf-profile.cpu-cycles.copy_user_generic_string.copy_page_to_iter.pipe_read.new_sync_read.__vfs_read

xps2: Nehalem
Memory: 4G

wsm: Westmere
Memory: 6G

lkp-sb03: Sandy Bridge-EP
Memory: 64G

lkp-sbx04: Sandy Bridge-EX
Memory: 64G



time.involuntary_context_switches

1300 ++-------------------------------------------------------------------+
1200 ** *.* .* **. *.***. *. * *. **. *. *.* *. *.*|
| + .* : * * .**. *.* * * * *.* * **. : ***.* * * *
1100 ++ ** * * * * |
1000 ++ |
| |
900 ++ |
800 ++ |
700 ++ |
| |
600 ++ |
500 ++ O O O |
OO OO OOO OO O O OO OO OOO OO O OO OOO OO O O |
400 ++ O O |
300 ++-------------------------------------------------------------------+


cpuidle.C3-NHM.time

9.5e+08 ++----------------------------------------------------------------+
| O |
9e+08 ++ O |
| O O O O O |
O OO OOO OO O O O O OOO O O OOO O |
8.5e+08 +O O O O O O O |
| |
8e+08 ++ |
| |
7.5e+08 ++ |
| .* .* *. * * .** ** .* * *
| **.* *** * * * :+ ** *.** + :.* *. *.**.* * * .* *.*|
7e+08 *+.* * + * * * * ** * * |
|* * |
6.5e+08 ++----------------------------------------------------------------+


cpuidle.C6-NHM.time

1.6e+09 ++---------------------------------------------------------------+
| .* * * * *|
1.55e+09 **.* * : :+ *.* * * * * : .***. **.* .** :+ ::|
| :: : ** : : ::+ ::.** : *.*** * :* : * * : |
| * * * : * * * *.* * *.* ** |
1.5e+09 ++ * *
| |
1.45e+09 ++ |
| O O |
1.4e+09 OO O O O O OOO OO O O O O |
| O O O OO OO OOO O O O O |
| O O O |
1.35e+09 ++ O |
| |
1.3e+09 ++---------------------------------------------------------------+


cpuidle.C6-NHM.usage

1.4e+06 ++----------------------------------------------------*-----------+
**. *. * *. * : |
1.2e+06 ++ ** **. ** *. *. **. * * * : ***.* :.* : : **.** |
| *.* * + : * *.** * * + * * *.* *.* :.**
1e+06 ++ * * * * |
| |
800000 ++ |
| |
600000 ++ |
| |
400000 ++ |
| |
200000 OO OOO OOO OOO OO OOO OOO OOO OOO OOO OOO OOO O |
| |
0 ++----------------------------------------------------------------+


will-it-scale.time.involuntary_context_switches

1300 ++-------------------------------------------------------------------+
1200 ** *.* .* **. *.***. *. * *. **. *. *.* *. *.*|
| + .* : * * .**. *.* * * * *.* * **. : ***.* * * *
1100 ++ ** * * * * |
1000 ++ |
| |
900 ++ |
800 ++ |
700 ++ |
| |
600 ++ |
500 ++ O O O |
OO OO OOO OO O O OO OO OOO OO O OO OOO OO O O |
400 ++ O O |
300 ++-------------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Ying Huang

---
testcase: will-it-scale
default-monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 10
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor: performance
commit: ea2bbe3b9bf930408db205344fe10c8f719ba738
model: Westmere
memory: 6G
nr_hdd_partitions: 1
hdd_partitions:
swap_partitions:
rootfs_partition:
netconsole_port: 6667
perf-profile:
  freq: 800
will-it-scale:
  test: malloc1
testbox: wsm
tbox_group: wsm
kconfig: x86_64-rhel
enqueue_time: 2015-02-14 18:21:56.804365062 +08:00
head_commit: ea2bbe3b9bf930408db205344fe10c8f719ba738
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: linux-devel/devel-hourly-2015021423
kernel: "/kernel/x86_64-rhel/ea2bbe3b9bf930408db205344fe10c8f719ba738/vmlinuz-3.19.0-gea2bbe3"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/wsm/will-it-scale/performance-malloc1/debian-x86_64-2015-02-07.cgz/x86_64-rhel/ea2bbe3b9bf930408db205344fe10c8f719ba738/0"
job_file: "/lkp/scheduled/wsm/cyclic_will-it-scale-performance-malloc1-x86_64-rhel-HEAD-ea2bbe3b9bf930408db205344fe10c8f719ba738-0-20150214-89994-1evra14.yaml"
dequeue_time: 2015-02-15 07:22:39.683579511 +08:00
nr_cpu: "$(nproc)"
job_state: finished
loadavg: 8.39 4.93 2.03 1/157 5628
start_time: '1423956183'
end_time: '1423956487'
version: "/lkp/lkp/.src-20150213-094846"
./runtest.py malloc1 32 both 1 6 9 12
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx