[LKP] [mutex] 871a6bb4916: -1.8% will-it-scale.per_process_ops, -98.3% will-it-scale.time.voluntary_context_switches, +209.6% will-it-scale.time.involuntary_context_switches

From: Huang Ying
Date: Sun Feb 15 2015 - 02:47:04 EST


FYI, we noticed the following changes on

git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
commit 871a6bb4916fef3123b6ff749b0dc82680fb0d2a ("mutex: In mutex_spin_on_owner(), return true when owner changes")


testbox/testcase/testparams: wsm/will-it-scale/performance-writeseek3

e07e0d4cb0c4bfe8 871a6bb4916fef3123b6ff749b
---------------- --------------------------
%stddev %change %stddev
\ | \
24972759 ± 2% -98.3% 417134 ± 9% will-it-scale.time.voluntary_context_switches
2223 ± 49% +209.6% 6884 ± 10% will-it-scale.time.involuntary_context_switches
542 ± 32% +91.3% 1037 ± 0% will-it-scale.time.system_time
186 ± 30% +86.3% 347 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
26.11 ± 5% -22.7% 20.18 ± 2% will-it-scale.time.user_time
0.09 ± 1% -18.2% 0.07 ± 1% will-it-scale.scalability
783528 ± 0% -1.8% 769550 ± 0% will-it-scale.per_process_ops
6038710 ± 9% -99.4% 34622 ± 29% sched_debug.cpu#8.nr_switches
354318 ± 12% -88.4% 41041 ± 3% softirqs.SCHED
1.67 ± 14% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.pick_next_task_fair.__sched_text_start.schedule.schedule_preempt_disabled.cpu_startup_entry
1.82 ± 9% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.dequeue_task_fair.dequeue_task.deactivate_task.__sched_text_start.schedule
2.09 ± 6% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_lock.try_to_wake_up.wake_up_process.__mutex_unlock_slowpath.mutex_unlock
2.05 ± 9% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__sched_text_start.schedule.schedule_preempt_disabled
2.07 ± 9% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.deactivate_task.__sched_text_start.schedule.schedule_preempt_disabled.__mutex_lock_slowpath
2.45 ± 11% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.__sched_text_start.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
3.72 ± 3% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_process.__mutex_unlock_slowpath
1033400 ± 5% -89.4% 109500 ± 30% sched_debug.cpu#3.ttwu_count
975947 ± 2% -88.8% 109094 ± 30% sched_debug.cpu#3.sched_goidle
12.27 ± 10% +492.7% 72.73 ± 1% perf-profile.cpu-cycles.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.new_sync_write
3.22 ± 26% +1718.0% 58.50 ± 1% perf-profile.cpu-cycles.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
3.22 ± 10% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_unlock_irqrestore.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit
4.29 ± 9% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.__sched_text_start.schedule.schedule_preempt_disabled.__mutex_lock_slowpath.mutex_lock
4.54 ± 2% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.ttwu_do_activate.constprop.85.try_to_wake_up.wake_up_process.__mutex_unlock_slowpath.mutex_unlock
4.02 ± 9% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
1953046 ± 2% -88.7% 219857 ± 30% sched_debug.cpu#3.sched_count
15.15 ± 2% -84.0% 2.42 ± 36% perf-profile.cpu-cycles.__mutex_unlock_slowpath.mutex_unlock.generic_file_write_iter.new_sync_write.vfs_write
4.39 ± 10% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.schedule.schedule_preempt_disabled.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
4.41 ± 9% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.schedule_preempt_disabled.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.new_sync_write
6.24 ± 15% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
6.69 ± 15% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
6.85 ± 15% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
1.372e+08 ± 14% -95.4% 6291721 ± 10% cpuidle.C1-NHM.time
1005704 ± 29% -97.1% 28896 ± 3% cpuidle.C1-NHM.usage
9.11e+08 ± 13% -98.8% 10867422 ± 11% cpuidle.C3-NHM.time
3887769 ± 14% -99.3% 28556 ± 8% cpuidle.C3-NHM.usage
1510725 ± 22% -95.9% 62458 ± 1% cpuidle.C6-NHM.usage
0.78 ± 32% -100.0% 0.00 ± 0% perf-profile.cpu-cycles._raw_spin_unlock_irqrestore.__hrtimer_start_range_ns.hrtimer_start.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter
24972759 ± 2% -98.3% 417134 ± 9% time.voluntary_context_switches
0.94 ± 25% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.hrtimer_cancel.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
0.99 ± 19% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.hrtimer_try_to_cancel.hrtimer_cancel.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
20.83 ± 15% -99.3% 0.15 ± 40% turbostat.CPU%c3
2456623 ± 8% -99.2% 20817 ± 38% sched_debug.cpu#10.ttwu_count
2493452 ± 9% -99.2% 20381 ± 38% sched_debug.cpu#10.sched_goidle
4988930 ± 9% -99.2% 42017 ± 37% sched_debug.cpu#10.sched_count
4988215 ± 9% -99.2% 41816 ± 37% sched_debug.cpu#10.nr_switches
14655.48 ± 32% -100.0% 0.00 ± 0% sched_debug.cfs_rq[9]:/.max_vruntime
14655.48 ± 32% -100.0% 0.00 ± 0% sched_debug.cfs_rq[9]:/.MIN_vruntime
2644593 ± 7% -99.4% 16074 ± 33% sched_debug.cpu#9.ttwu_count
2265014 ± 15% -99.3% 15600 ± 34% sched_debug.cpu#9.sched_goidle
3222460 ± 7% -98.6% 45873 ± 27% sched_debug.cpu#0.nr_switches
4531465 ± 15% -99.3% 32789 ± 32% sched_debug.cpu#9.sched_count
3223037 ± 7% -98.6% 46060 ± 26% sched_debug.cpu#0.sched_count
1609191 ± 7% -98.7% 20811 ± 29% sched_debug.cpu#0.sched_goidle
1498187 ± 10% -98.4% 23779 ± 27% sched_debug.cpu#0.ttwu_count
4530977 ± 15% -99.3% 32616 ± 32% sched_debug.cpu#9.nr_switches
2759516 ± 3% -99.4% 17615 ± 30% sched_debug.cpu#8.ttwu_count
3018619 ± 9% -99.4% 16773 ± 29% sched_debug.cpu#8.sched_goidle
6039570 ± 9% -99.4% 34816 ± 28% sched_debug.cpu#8.sched_count
1.45 ± 11% -100.0% 0.00 ± 0% perf-profile.cpu-cycles.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task.__sched_text_start
3261866 ± 12% -86.4% 443808 ± 24% sched_debug.cpu#1.nr_switches
2788959 ± 7% -99.5% 15010 ± 12% sched_debug.cpu#7.ttwu_count
2910769 ± 3% -99.5% 14317 ± 12% sched_debug.cpu#7.sched_goidle
5823925 ± 3% -99.5% 29602 ± 11% sched_debug.cpu#7.sched_count
5822919 ± 3% -99.5% 29393 ± 11% sched_debug.cpu#7.nr_switches
3262414 ± 12% -86.4% 443998 ± 24% sched_debug.cpu#1.sched_count
1630336 ± 12% -86.4% 221383 ± 24% sched_debug.cpu#1.sched_goidle
1590850 ± 11% -86.1% 221730 ± 24% sched_debug.cpu#1.ttwu_count
2244028 ± 15% -80.0% 448572 ± 20% sched_debug.cpu#4.sched_count
1121323 ± 15% -80.1% 223305 ± 20% sched_debug.cpu#4.sched_goidle
2727451 ± 4% -99.5% 14096 ± 5% sched_debug.cpu#6.ttwu_count
1952837 ± 2% -88.7% 219701 ± 30% sched_debug.cpu#3.nr_switches
9443 ± 24% +510.5% 57651 ± 20% sched_debug.cfs_rq[5]:/.exec_clock
5386675 ± 7% -99.5% 28321 ± 5% sched_debug.cpu#6.sched_count
5385896 ± 7% -99.5% 28079 ± 5% sched_debug.cpu#6.nr_switches
2243723 ± 15% -80.0% 448446 ± 20% sched_debug.cpu#4.nr_switches
58333 ± 26% +800.2% 525096 ± 19% sched_debug.cfs_rq[5]:/.min_vruntime
2692398 ± 7% -99.5% 13778 ± 5% sched_debug.cpu#6.sched_goidle
15.29 ± 2% -83.6% 2.51 ± 34% perf-profile.cpu-cycles.mutex_unlock.generic_file_write_iter.new_sync_write.vfs_write.sys_write
12 ± 25% +431.2% 63 ± 11% sched_debug.cpu#5.cpu_load[4]
12 ± 25% +278.0% 47 ± 29% sched_debug.cpu#4.cpu_load[4]
14 ± 20% +356.1% 65 ± 11% sched_debug.cpu#5.cpu_load[3]
88847 ± 19% +279.8% 337399 ± 37% sched_debug.cfs_rq[4]:/.min_vruntime
1031731 ± 16% -78.4% 223086 ± 20% sched_debug.cpu#4.ttwu_count
99804 ± 24% +338.8% 437987 ± 19% sched_debug.cfs_rq[3]:/.min_vruntime
2223 ± 49% +209.6% 6884 ± 10% time.involuntary_context_switches
17.87 ± 5% +308.0% 72.92 ± 1% perf-profile.cpu-cycles.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.new_sync_write.vfs_write
18 ± 18% +260.0% 67 ± 12% sched_debug.cpu#5.cpu_load[2]
6968 ± 20% +278.9% 26401 ± 9% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
151 ± 20% +279.4% 575 ± 9% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
144391 ± 29% +340.5% 635982 ± 10% sched_debug.cfs_rq[2]:/.min_vruntime
13 ± 27% +250.9% 48 ± 28% sched_debug.cpu#4.cpu_load[3]
13600 ± 14% +165.3% 36078 ± 44% sched_debug.cfs_rq[4]:/.exec_clock
14705 ± 16% +210.9% 45712 ± 23% sched_debug.cfs_rq[3]:/.exec_clock
21.26 ± 3% +249.9% 74.39 ± 1% perf-profile.cpu-cycles.mutex_lock.generic_file_write_iter.new_sync_write.vfs_write.sys_write
19 ± 32% +265.4% 71 ± 12% sched_debug.cpu#2.cpu_load[4]
146181 ± 22% +276.4% 550280 ± 7% sched_debug.cfs_rq[1]:/.min_vruntime
14 ± 19% +250.8% 51 ± 21% sched_debug.cpu#3.cpu_load[4]
15 ± 32% +215.9% 49 ± 27% sched_debug.cpu#4.cpu_load[2]
22075 ± 18% +206.6% 67680 ± 13% sched_debug.cfs_rq[2]:/.exec_clock
21 ± 25% +228.7% 71 ± 12% sched_debug.cpu#2.cpu_load[3]
30386 ± 36% +127.5% 69129 ± 9% sched_debug.cpu#5.nr_load_updates
211200 ± 15% +164.8% 559266 ± 24% sched_debug.cfs_rq[10]:/.min_vruntime
26 ± 19% +170.5% 71 ± 15% sched_debug.cpu#5.cpu_load[1]
17 ± 11% +200.0% 51 ± 21% sched_debug.cpu#3.cpu_load[3]
24 ± 24% +193.8% 71 ± 13% sched_debug.cpu#2.cpu_load[2]
115343 ± 5% -64.3% 41124 ± 5% softirqs.RCU
1683 ± 8% -62.1% 638 ± 6% cpuidle.POLL.usage
8658 ± 15% +144.1% 21138 ± 20% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
189 ± 15% +144.4% 462 ± 20% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
256 ± 29% +159.6% 666 ± 8% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
11739 ± 29% +159.9% 30515 ± 8% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
29 ± 14% +110.3% 61 ± 21% sched_debug.cpu#10.cpu_load[4]
336086 ± 15% +150.3% 841284 ± 6% sched_debug.cfs_rq[6]:/.min_vruntime
969 ± 40% +109.0% 2025 ± 9% sched_debug.cpu#1.curr->pid
21 ± 5% +145.2% 51 ± 20% sched_debug.cpu#3.cpu_load[2]
202 ± 13% +155.3% 515 ± 12% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
9249 ± 13% +155.4% 23619 ± 12% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
739 ± 21% +150.6% 1853 ± 22% sched_debug.cpu#4.curr->pid
202942 ± 11% +125.2% 456974 ± 16% sched_debug.cfs_rq[9]:/.min_vruntime
970 ± 6% -55.7% 430 ± 45% sched_debug.cpu#6.ttwu_local
19 ± 37% +167.9% 52 ± 26% sched_debug.cpu#4.cpu_load[1]
452477 ± 19% +140.5% 1088065 ± 0% softirqs.TIMER
22472 ± 11% +148.8% 55914 ± 7% sched_debug.cfs_rq[1]:/.exec_clock
36 ± 27% +106.9% 75 ± 22% sched_debug.cfs_rq[5]:/.runnable_load_avg
1.52 ± 7% -53.9% 0.70 ± 7% perf-profile.cpu-cycles.system_call_after_swapgs
32 ± 9% +94.5% 62 ± 20% sched_debug.cpu#10.cpu_load[3]
21 ± 34% +183.3% 59 ± 9% sched_debug.cpu#1.cpu_load[4]
542 ± 32% +91.3% 1037 ± 0% time.system_time
1.01 ± 8% -52.1% 0.48 ± 3% perf-profile.cpu-cycles.__sb_end_write.vfs_write.sys_write.system_call_fastpath
1.44 ± 16% -57.3% 0.61 ± 8% perf-profile.cpu-cycles.sys_lseek.system_call_fastpath
241726 ± 20% +161.9% 633158 ± 4% sched_debug.cfs_rq[7]:/.min_vruntime
29 ± 26% +133.6% 69 ± 15% sched_debug.cpu#2.cpu_load[1]
186 ± 30% +86.3% 347 ± 0% time.percent_of_cpu_this_job_got
23 ± 15% +151.1% 57 ± 30% sched_debug.cfs_rq[4]:/.runnable_load_avg
46 ± 24% +144.9% 114 ± 21% sched_debug.cpu#6.cpu_load[3]
41 ± 20% +82.6% 76 ± 28% sched_debug.cpu#11.cpu_load[0]
893 ± 11% +122.5% 1986 ± 4% sched_debug.cpu#10.curr->pid
50 ± 28% +148.0% 124 ± 22% sched_debug.cpu#6.cpu_load[4]
47 ± 23% +131.4% 108 ± 17% sched_debug.cpu#6.cpu_load[2]
1.26 ± 7% -54.5% 0.57 ± 8% perf-profile.cpu-cycles.__sb_start_write.vfs_write.sys_write.system_call_fastpath
3.00 ± 8% -52.7% 1.42 ± 4% perf-profile.cpu-cycles.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
31 ± 36% +130.2% 72 ± 7% sched_debug.cpu#7.cpu_load[4]
3.37 ± 7% -51.4% 1.64 ± 1% perf-profile.cpu-cycles.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
41 ± 27% +90.2% 78 ± 21% sched_debug.cpu#5.cpu_load[0]
1152 ± 36% +91.5% 2207 ± 9% sched_debug.cpu#7.curr->pid
32766 ± 7% +78.3% 58423 ± 27% sched_debug.cfs_rq[10]:/.exec_clock
2.42 ± 13% -54.2% 1.11 ± 32% perf-profile.cpu-cycles._raw_spin_lock.__mutex_unlock_slowpath.mutex_unlock.generic_file_write_iter.new_sync_write
22 ± 32% +164.4% 59 ± 8% sched_debug.cpu#1.cpu_load[3]
998 ± 21% +108.6% 2081 ± 8% sched_debug.cpu#8.curr->pid
1.43 ± 8% -49.6% 0.72 ± 10% perf-profile.cpu-cycles.unlock_page.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
27 ± 9% +85.5% 51 ± 19% sched_debug.cpu#3.cpu_load[1]
51 ± 20% +106.3% 105 ± 12% sched_debug.cpu#6.cpu_load[1]
2.20 ± 8% -51.8% 1.06 ± 6% perf-profile.cpu-cycles.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
7.46 ± 4% -51.1% 3.65 ± 6% perf-profile.cpu-cycles.copy_user_generic_string.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
2.79 ± 3% -48.3% 1.44 ± 7% perf-profile.cpu-cycles.fsnotify.vfs_write.sys_write.system_call_fastpath
327 ± 17% +75.1% 573 ± 15% sched_debug.cfs_rq[10]:/.tg_runnable_contrib
14967 ± 17% +75.2% 26221 ± 15% sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
2 ± 15% +109.1% 5 ± 7% vmstat.procs.r
24 ± 25% +145.5% 60 ± 6% sched_debug.cpu#1.cpu_load[1]
30 ± 21% +99.2% 59 ± 13% sched_debug.cpu#8.cpu_load[4]
23 ± 29% +151.6% 59 ± 7% sched_debug.cpu#1.cpu_load[2]
19.91 ± 3% -48.4% 10.27 ± 3% perf-profile.cpu-cycles.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write.sys_write
17.33 ± 4% -48.1% 8.99 ± 3% perf-profile.cpu-cycles.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write.vfs_write
3.35 ± 2% -46.9% 1.78 ± 16% perf-profile.cpu-cycles.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.new_sync_write
29 ± 29% +85.7% 55 ± 17% sched_debug.cpu#9.cpu_load[4]
1.18 ± 9% -51.8% 0.57 ± 11% perf-profile.cpu-cycles.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
1.31 ± 5% -51.0% 0.65 ± 7% perf-profile.cpu-cycles.system_call
12371 ± 18% +118.5% 27032 ± 6% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
244456 ± 15% +123.7% 546812 ± 14% sched_debug.cfs_rq[8]:/.min_vruntime
271 ± 18% +118.0% 590 ± 6% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
36 ± 9% +78.5% 64 ± 18% sched_debug.cpu#10.cpu_load[2]
35 ± 38% +103.5% 72 ± 6% sched_debug.cpu#7.cpu_load[3]
31 ± 19% +90.5% 60 ± 13% sched_debug.cpu#8.cpu_load[3]
3694 ± 15% +94.6% 7190 ± 0% sched_debug.cfs_rq[0]:/.tg->runnable_avg
3694 ± 15% +94.4% 7183 ± 0% sched_debug.cfs_rq[1]:/.tg->runnable_avg
37 ± 22% +73.8% 64 ± 5% sched_debug.cfs_rq[1]:/.runnable_load_avg
3695 ± 15% +93.8% 7161 ± 0% sched_debug.cfs_rq[2]:/.tg->runnable_avg
3696 ± 15% +93.7% 7162 ± 0% sched_debug.cfs_rq[3]:/.tg->runnable_avg
3701 ± 15% +93.6% 7165 ± 0% sched_debug.cfs_rq[5]:/.tg->runnable_avg
3698 ± 15% +93.7% 7162 ± 0% sched_debug.cfs_rq[4]:/.tg->runnable_avg
3703 ± 15% +93.5% 7166 ± 0% sched_debug.cfs_rq[6]:/.tg->runnable_avg
3711 ± 15% +93.3% 7173 ± 0% sched_debug.cfs_rq[9]:/.tg->runnable_avg
3711 ± 15% +93.2% 7169 ± 0% sched_debug.cfs_rq[8]:/.tg->runnable_avg
3707 ± 15% +93.3% 7167 ± 0% sched_debug.cfs_rq[7]:/.tg->runnable_avg
3714 ± 15% +93.1% 7174 ± 0% sched_debug.cfs_rq[10]:/.tg->runnable_avg
3717 ± 15% +93.1% 7177 ± 0% sched_debug.cfs_rq[11]:/.tg->runnable_avg
457 ± 16% +80.3% 824 ± 4% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
327 ± 19% +63.5% 535 ± 15% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
5.21 ± 8% +94.9% 10.14 ± 1% perf-profile.cpu-cycles.mutex_spin_on_owner.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
20973 ± 16% +80.4% 37832 ± 4% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
14981 ± 19% +63.9% 24548 ± 15% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
36 ± 20% +38.8% 51 ± 19% sched_debug.cpu#3.cpu_load[0]
1052 ± 22% +68.3% 1770 ± 15% sched_debug.cpu#3.curr->pid
41 ± 42% +78.7% 73 ± 5% sched_debug.cpu#7.cpu_load[2]
42 ± 33% +63.9% 69 ± 19% sched_debug.cfs_rq[2]:/.runnable_load_avg
28 ± 25% +121.4% 62 ± 5% sched_debug.cpu#1.cpu_load[0]
2973 ± 6% -41.9% 1727 ± 0% uptime.idle
1370 ± 10% +68.7% 2311 ± 9% sched_debug.cpu#6.curr->pid
301194 ± 6% +73.9% 523791 ± 10% sched_debug.cpu#11.avg_idle
32 ± 14% +83.8% 59 ± 13% sched_debug.cpu#8.cpu_load[2]
234 ± 30% +45.9% 341 ± 13% sched_debug.cfs_rq[2]:/.tg_load_contrib
41 ± 17% +64.1% 68 ± 17% sched_debug.cpu#10.cpu_load[1]
61684 ± 20% +85.4% 114380 ± 9% sched_debug.cfs_rq[6]:/.exec_clock
315489 ± 24% +83.4% 578759 ± 10% sched_debug.cfs_rq[0]:/.min_vruntime
1.474e+08 ± 8% -42.1% 85363914 ± 6% cpuidle.C1E-NHM.time
1.15 ± 4% -39.4% 0.70 ± 12% perf-profile.cpu-cycles.__srcu_read_lock.fsnotify.vfs_write.sys_write.system_call_fastpath
28 ± 40% +105.4% 57 ± 27% sched_debug.cpu#4.cpu_load[0]
35 ± 14% +68.8% 59 ± 13% sched_debug.cpu#8.cpu_load[1]
31 ± 12% +54.0% 47 ± 13% sched_debug.cpu#11.cpu_load[4]
59 ± 22% +75.8% 103 ± 14% sched_debug.cpu#6.cpu_load[0]
1137 ± 25% +92.1% 2184 ± 14% sched_debug.cpu#5.curr->pid
32 ± 27% +72.5% 56 ± 16% sched_debug.cpu#9.cpu_load[3]
264073 ± 5% +49.3% 394342 ± 13% sched_debug.cpu#8.avg_idle
37082 ± 12% +79.2% 66466 ± 6% sched_debug.cfs_rq[7]:/.exec_clock
39 ± 22% +46.8% 58 ± 14% sched_debug.cpu#8.cpu_load[0]
31977 ± 5% +51.3% 48388 ± 22% sched_debug.cfs_rq[9]:/.exec_clock
17401 ± 15% +79.1% 31162 ± 4% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
380 ± 15% +78.6% 679 ± 4% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
40 ± 13% +72.4% 70 ± 17% sched_debug.cfs_rq[10]:/.runnable_load_avg
37 ± 27% +58.8% 58 ± 9% sched_debug.cfs_rq[8]:/.runnable_load_avg
38761708 ± 33% +61.8% 62732448 ± 7% cpuidle.POLL.time
1178 ± 19% +83.1% 2158 ± 5% sched_debug.cpu#0.curr->pid
1220 ± 4% +55.6% 1899 ± 11% sched_debug.cpu#11.curr->pid
57975 ± 9% +40.6% 81536 ± 8% sched_debug.cpu#2.nr_load_updates
166493 ± 16% -41.6% 97254 ± 35% sched_debug.cpu#3.ttwu_local
75051 ± 6% -45.2% 41162 ± 24% sched_debug.cpu#11.nr_load_updates
57.44 ± 2% +52.6% 87.64 ± 1% perf-profile.cpu-cycles.generic_file_write_iter.new_sync_write.vfs_write.sys_write.system_call_fastpath
35 ± 13% +57.4% 55 ± 10% sched_debug.cpu#11.cpu_load[2]
293147 ± 5% +41.2% 413910 ± 19% sched_debug.cpu#10.avg_idle
32 ± 12% +55.7% 51 ± 9% sched_debug.cpu#11.cpu_load[3]
38 ± 16% +66.7% 63 ± 18% sched_debug.cpu#11.cpu_load[1]
16272 ± 14% +32.4% 21550 ± 13% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
356 ± 14% +32.1% 471 ± 13% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
2315 ± 1% +48.7% 3444 ± 1% proc-vmstat.pgactivate
61.08 ± 2% +47.4% 90.04 ± 2% perf-profile.cpu-cycles.new_sync_write.vfs_write.sys_write.system_call_fastpath
46 ± 26% +66.3% 76 ± 18% sched_debug.cpu#10.cpu_load[0]
395145 ± 15% -28.1% 283954 ± 15% sched_debug.cpu#2.avg_idle
16930 ± 13% +62.2% 27468 ± 8% sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
370 ± 13% +62.0% 600 ± 8% sched_debug.cfs_rq[8]:/.tg_runnable_contrib
1111 ± 35% +81.5% 2016 ± 16% sched_debug.cpu#2.curr->pid
53 ± 24% +72.0% 92 ± 13% sched_debug.cfs_rq[6]:/.runnable_load_avg
1264 ± 16% +50.8% 1906 ± 1% sched_debug.cpu#9.curr->pid
76679 ± 3% -26.1% 56701 ± 15% sched_debug.cpu#8.nr_load_updates
32.91 ± 5% -26.8% 24.09 ± 1% turbostat.CPU%c1
18589 ± 18% +64.6% 30603 ± 8% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
405 ± 18% +64.4% 667 ± 8% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
37668 ± 8% +44.7% 54510 ± 16% sched_debug.cfs_rq[8]:/.exec_clock
68.80 ± 2% +36.2% 93.71 ± 2% perf-profile.cpu-cycles.vfs_write.sys_write.system_call_fastpath
70.12 ± 2% +34.5% 94.33 ± 1% perf-profile.cpu-cycles.sys_write.system_call_fastpath
26.11 ± 5% -22.7% 20.18 ± 2% time.user_time
71.98 ± 2% +32.2% 95.16 ± 1% perf-profile.cpu-cycles.system_call_fastpath
1191441 ± 3% -22.4% 924579 ± 0% cpuidle.C1E-NHM.usage
2.82 ± 5% -27.8% 2.04 ± 12% perf-profile.cpu-cycles.mutex_unlock.new_sync_write.vfs_write.sys_write.system_call_fastpath
2741 ± 0% +27.5% 3496 ± 0% proc-vmstat.nr_shmem
10969 ± 0% +27.5% 13987 ± 0% meminfo.Shmem
67072 ± 3% -25.0% 50333 ± 21% sched_debug.cpu#9.nr_load_updates
253442 ± 8% +29.3% 327656 ± 10% sched_debug.cpu#7.avg_idle
61 ± 6% +13.9% 69 ± 1% turbostat.CoreTmp
121 ± 6% -18.5% 99 ± 23% sched_debug.cfs_rq[0]:/.load
76604 ± 3% -11.6% 67691 ± 6% sched_debug.cpu#7.nr_load_updates
23935 ± 3% +12.8% 26987 ± 2% meminfo.Active(anon)
5982 ± 3% +12.8% 6745 ± 2% proc-vmstat.nr_active_anon
63339 ± 10% +13.7% 72028 ± 4% sched_debug.cpu#1.nr_load_updates
69405 ± 3% -7.2% 64386 ± 4% meminfo.DirectMap4k
401667 ± 2% -97.0% 11907 ± 3% vmstat.system.cs
29.02 ± 16% +106.1% 59.81 ± 0% turbostat.%Busy
1022 ± 16% +105.8% 2103 ± 0% turbostat.Avg_MHz
7739 ± 9% +34.9% 10437 ± 0% vmstat.system.in

wsm: Westmere
Memory: 6G




time.voluntary_context_switches

3e+07 ++------------------------*--*------------------------------------+
*.. : + * |
2.5e+07 ++ .*..*.*.. : + .*..*.*.. .. + .*. .*.*..*
| *.*. *.*..*..* *.*. * *. *..*. |
| |
2e+07 ++ |
| |
1.5e+07 ++ |
| |
1e+07 ++ |
| |
| |
5e+06 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


softirqs.SCHED

450000 ++-----------------------------------------------------------------+
| *.. * |
400000 ++ : .* : : *
350000 ++ : *. + *.. : : ..|
| .*..*..* *..*.. .*..*. + : * |
300000 *+.* *.*..*. *..*..* *..*.*..* |
250000 ++ |
| |
200000 ++ |
150000 ++ |
| |
100000 ++ |
50000 ++ |
O O O O O O O O O O O O O O O O O O O O O |
0 ++-----------------------------------------------------------------+


softirqs.HRTIMER

12000 ++------------------------------------------------------------------+
| .*.*..*..* |
10000 ++ *. .*..*. : |
| .* .. *. : *.. *..*.. |
*. + .*..*.. * : : *.. .. *.*..*
8000 ++ *. + : : *.* |
| *. + * |
6000 ++ * |
| |
4000 ++ |
| |
| |
2000 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


softirqs.RCU

130000 ++-----------------------------------------------------------------+
120000 ++ *..|
| .*. .*..* *.*..*..*. .*..*..*.*..*.. *..*.. + *
110000 *+ *. + + *. *.*.. + * |
100000 ++ *.. + *..* |
| * |
90000 ++ |
80000 ++ |
70000 ++ |
| |
60000 ++ |
50000 ++ |
| O O O |
40000 O+ O O O O O O O O O O O O O O O O O |
30000 ++-----------------------------------------------------------------+


will-it-scale.time.voluntary_context_switches

3e+07 ++------------------------*--*------------------------------------+
*.. : + * |
2.5e+07 ++ .*..*.*.. : + .*..*.*.. .. + .*. .*.*..*
| *.*. *.*..*..* *.*. * *. *..*. |
| |
2e+07 ++ |
| |
1.5e+07 ++ |
| |
1e+07 ++ |
| |
| |
5e+06 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


vmstat.system.cs

450000 ++-----------------------------------------------------------------+
*.. .*.*.. *.. .*.*..|
400000 ++ * *..* *.*..*. *..*..*.*.. *.*.. : *. *
350000 ++ + .. + : .. *.. : |
| * *.. : * * |
300000 ++ : |
250000 ++ * |
| |
200000 ++ |
150000 ++ |
| |
100000 ++ |
50000 ++ |
| |
0 O+-O-O--O--O-O--O--O-O--O--O-O--O--O--O-O--O--O-O--O--O------------+


sched_debug.cpu#0.nr_switches

4e+06 ++----------------------------------------------------------------+
| * |
3.5e+06 ++ .. + |
3e+06 *+. *.. *..* *..|
| * *..*. .*.. + *..*.*..*..*. .*..*.*.. + *
2.5e+06 ++ + .. *.. *. * *. * |
| * + |
2e+06 ++ * |
| |
1.5e+06 ++ |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#0.sched_count

4e+06 ++----------------------------------------------------------------+
| * |
3.5e+06 ++ .. + |
3e+06 *+. *.. *..* *..|
| * *..*. .*.. + *..*.*..*..*. .*..*.*.. + *
2.5e+06 ++ + .. *.. *. * *. * |
| * + |
2e+06 ++ * |
| |
1.5e+06 ++ |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#0.sched_goidle

1.8e+06 ++-----------------------------------------------------------*----+
| .. *..|
1.6e+06 *+. *..*.. *..* |
1.4e+06 ++ * *..*. .*.. + *.*..*..*. .*..*.*.. + *
| + .. *.. *. * *. * |
1.2e+06 ++ * + |
1e+06 ++ * |
| |
800000 ++ |
600000 ++ |
| |
400000 ++ |
200000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#0.ttwu_count

1.8e+06 ++----------------------------------------------------------------+
| * *. |
1.6e+06 ++ *.. : + .. *..|
1.4e+06 *+. *.. .*.. + *.. .*..*.. : *.. *..* |
| * + *.*.. *. * * *.*.. : + *
1.2e+06 ++ + + + * * |
1e+06 ++ * * |
| |
800000 ++ |
600000 ++ |
| |
400000 ++ |
200000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#6.nr_switches

7e+06 ++------------------------------------------------------------------+
| |
6e+06 ++ * *..*.. |
| .* + : .*..*.. .* : *. |
5e+06 *+ : + : *..*.*. *.*..*..*..* + : *..*
| : .* : .. + : |
4e+06 ++ *. : .* *..*. : |
| * * |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#6.sched_count

7e+06 ++------------------------------------------------------------------+
| |
6e+06 ++ * *..*.. |
| .* + : .*..*.. .* : *. |
5e+06 *+ : + : *..*.*. *.*..*..*..* + : *..*
| : .* : .. + : |
4e+06 ++ *. : .* *..*. : |
| * * |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#6.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 ++ * *..*.. |
| .* +: .*..*.. .* : *. |
2.5e+06 *+ : + : *..*..* *.*..*..*.*. + : *..*
| : .* : + + : |
2e+06 ++ *. : .* *.*..: |
| *. * |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#6.ttwu_count

3e+06 ++----------------------------------------------------------------+
| .* .*. .*. *.. .*.*..*
2.5e+06 *+. *. : *..*. *..*..*.*..*. *..*.. : *. |
| * + : : *. : |
| + + : : *.. : |
2e+06 ++ * *..* : |
| * |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#7.nr_switches

7e+06 ++------------------------------------------------------------------+
| * |
6e+06 ++ .. + .*.. .*.. .*
*..* *..* * *..*. *.*..*..*..*. .*.. *. *.*. |
5e+06 ++ : + + + *. *. .. |
| : + + + * |
4e+06 ++ * *.* |
| |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#7.sched_count

7e+06 ++------------------------------------------------------------------+
| * |
6e+06 ++ .. + .*.. .*.. .*
*..* *..* * *..*. *.*..*..*..*. .*.. *. *.*. |
5e+06 ++ : + + + *. *. .. |
| : + + + * |
4e+06 ++ * *.* |
| |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#7.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| *.. |
3e+06 ++ .. .*.. .*.. .*
*..* *..* * *.*. *.*..*..*.*.. .*. *. *.*. |
2.5e+06 ++ : + : : *. *.. + |
| : + : : * |
2e+06 ++ * *..* |
| |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#7.ttwu_count

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 ++.* .* *. *. .*
*. : *..* .* + + *.. .*.. .. *. |
2.5e+06 ++ : + : *..*. + + *..* *..* .*..* |
| :+ : + * + .* |
2e+06 ++ * *..* *. |
| |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#8.nr_switches

7e+06 ++------------------------------------------------------------------+
| * * .* *..|
6e+06 ++ : + .*.. .. + .*.. *.. *. + : *
*.. : + .*.*. * *. *.. + * .. + : |
5e+06 ++ * : *.. .*. * + * * |
| + : .*. + + |
4e+06 ++ * * * |
| |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#8.sched_count

7e+06 ++------------------------------------------------------------------+
| * * .* *..|
6e+06 ++ : + .*.. .. + .*.. *.. *. + : *
*.. : + .*.*. * *. *.. + * .. + : |
5e+06 ++ * : *.. .*. * + * * |
| + : .*. + + |
4e+06 ++ * * * |
| |
3e+06 ++ |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#8.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| * * .* *..|
3e+06 ++ : + .*.. .. + .*.. *.. *. + : *
*.. : + .*..* * *. *. .. * + + : |
2.5e+06 ++ * : * .*. * : * * |
| + : + .* : .. |
2e+06 ++ * *. * |
| |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#8.ttwu_count

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 ++ .*.. *.. |
*.. *..*.*. .*..*..*. : *..*.*..*
2.5e+06 ++ *.. .. * *.. .* : |
| *. .. *. .*.* *. : * |
2e+06 ++ * *. : .. |
| * |
1.5e+06 ++ |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#9.nr_switches

7e+06 ++------------------------------------------------------------------+
| |
6e+06 *+ *.. * * |
|+ * + *.. : : : : * |
5e+06 +++ .. : * : : : : + : *
| *. .* : *.. .. *..*. .*..*..*. : : : : + : ..|
4e+06 ++ *. : : * *. * : : * * |
| : : :.* |
3e+06 ++ * * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#9.sched_count

7e+06 ++------------------------------------------------------------------+
| |
6e+06 *+ *.. * * |
|+ * + *.. : : : : * |
5e+06 +++ .. : * : : : : + : *
| *. .* : *.. .. *..*. .*..*..*. : : : : + : ..|
4e+06 ++ *. : : * *. * : : * * |
| : : :.* |
3e+06 ++ * * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#9.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 *+ *. * * |
|+ * .. *.. :: :: * |
2.5e+06 +++ ..: * : : : : + : *
| *. .* : *. .. *..*. .*..*.*.. : : : : + : ..|
2e+06 ++ *. : + * *. * : : * * |
| :+ : .* |
1.5e+06 ++ * *. |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#9.ttwu_count

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 ++ *.. *.. *..*.. |
*..* .* + *..* + * : *.*..|
2.5e+06 ++ : *. : *.. .* + + + .*..* : |
| : .. : + *. * *. + : *
2e+06 ++ * : * *..* |
| :.. |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#10.nr_switches

7e+06 ++------------------------------------------------------------------+
| |
6e+06 ++ *.. |
*.. : *.. *.. .*.* |
5e+06 ++ .* *.. : .*.*..*.. + *. + |
| *. .*. : .. * *. *..*. + +|
4e+06 ++ *. : * *..*..*.* *
| : + |
3e+06 ++ * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#10.sched_count

7e+06 ++------------------------------------------------------------------+
| |
6e+06 ++ *.. |
*.. : *.. *.. .*.* |
5e+06 ++ .* *.. : .*.*..*.. + *. + |
| *. .*. : .. * *. *..*. + +|
4e+06 ++ *. : * *..*..*.* *
| : + |
3e+06 ++ * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#10.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| |
3e+06 ++ *. |
*.. + *.. *.. .*.* |
2.5e+06 ++ .* *.. + .*.*..*.. : *. + |
| *. .*. : + * *. *.*.. : +|
2e+06 ++ *. : * *..*.*..* *
| :.. |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#10.ttwu_count

3e+06 ++-*---------------------------------------------------*----------+
*. : .*.. .*.. : *.. |
2.5e+06 ++ : .*..* *..* *..*. : *. |
| : .*. *..*..*. : *..|
| : .*..* * *..: *
2e+06 ++ *. : + * |
| : + |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#11.nr_switches

7e+06 ++------------------------------------------------------------------+
| .*.. .*.. |
6e+06 *+.* *.. .* *. *.. .* |
| : + *. *.*.. + *. : |
5e+06 ++ : .* + * + : .*
| : *. + *..*..*.* : * *. |
4e+06 ++ : + + + : : |
| :+ * : : |
3e+06 ++ * * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#11.sched_count

7e+06 ++------------------------------------------------------------------+
| .*.. .*.. |
6e+06 *+.* *.. .* *. *.. .* |
| : + *. *.*.. + *. : |
5e+06 ++ : .* + * + : .*
| : *. + *..*..*.* : * *. |
4e+06 ++ : + + + : : |
| :+ * : : |
3e+06 ++ * * |
| |
2e+06 ++ |
| |
1e+06 ++ |
| |
0 O+-O-O--O--O--O-O--O--O-O--O--O--O-O--O--O--O-O--O--O-O-------------+


sched_debug.cpu#11.sched_goidle

3.5e+06 ++----------------------------------------------------------------+
| .*.. .* |
3e+06 *+.* *.. .* *. + *.. .* |
| : : *. *..*.. : *. : |
2.5e+06 ++ : .* : * : : .*
| : *. : *.*..*..* : * *. |
2e+06 ++ : + : .. : + |
| :+ * :+ |
1.5e+06 ++ * * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


sched_debug.cpu#11.ttwu_count

3.5e+06 ++----------------------------------------------------------------+
| .* *.. |
3e+06 *+. *. : *..*..*. .*. + *..*..* |
| : : : *..*. *..*.. + * + : *
2.5e+06 ++ * : : : *.* : * : ..|
| + : : * : .. * |
2e+06 ++ * : + * |
| :+ |
1.5e+06 ++ * |
| |
1e+06 ++ |
| |
500000 ++ |
| |
0 O+-O-O--O--O-O--O-O--O--O-O--O--O-O--O--O-O--O--O-O--O------------+


[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Huang, Ying

---
testcase: will-it-scale
default-monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 10
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor: performance
commit: 51391ce3d59376ccc2fccc3636f1e9fa74ef5d1a
model: Westmere
memory: 6G
nr_hdd_partitions: 1
hdd_partitions:
swap_partitions:
rootfs_partition:
netconsole_port: 6667
perf-profile:
  freq: 800
will-it-scale:
  test: writeseek3
testbox: wsm
tbox_group: wsm
kconfig: x86_64-rhel
enqueue_time: 2015-02-13 20:31:42.687116209 +08:00
head_commit: 51391ce3d59376ccc2fccc3636f1e9fa74ef5d1a
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: linux-devel/devel-hourly-2015021304
kernel: "/kernel/x86_64-rhel/51391ce3d59376ccc2fccc3636f1e9fa74ef5d1a/vmlinuz-3.19.0-g51391ce"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/wsm/will-it-scale/performance-writeseek3/debian-x86_64-2015-02-07.cgz/x86_64-rhel/51391ce3d59376ccc2fccc3636f1e9fa74ef5d1a/0"
job_file: "/lkp/scheduled/wsm/cyclic_will-it-scale-performance-writeseek3-x86_64-rhel-HEAD-51391ce3d59376ccc2fccc3636f1e9fa74ef5d1a-0-20150213-31485-u5pvqj.yaml"
dequeue_time: 2015-02-14 10:44:22.812100665 +08:00
nr_cpu: "$(nproc)"
job_state: finished
loadavg: 8.51 5.01 2.06 1/160 5654
start_time: '1423881888'
end_time: '1423882192'
version: "/lkp/lkp/.src-20150213-094846"
./runtest.py writeseek3 32 both 1 6 9 12
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx