[LKP] [locking/rwsem] 1a99367023f: no primary result change, +23.6% will-it-scale.time.system_time
From: Huang Ying
Date: Thu Mar 12 2015 - 02:35:50 EST
FYI, we noticed the following changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1a99367023f6ac664365a37fa508b059e31d0e88 ("locking/rwsem: Check for active lock before bailing on spinning")
There are some minor will-it-scale.per_thread_ops changes below (-1.8%), but they were not stable enough during bisect.
So in general, there is no user-visible change, just more system time.
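For reference, the gist of the commit: a writer in the rwsem slow path no longer abandons optimistic spinning merely because sem->owner is NULL; it now also checks whether the lock is actually active, and keeps spinning when it is not. Longer spinning keeps writers on-CPU instead of sleeping, which is consistent with the extra system time and the large osq_lock/rwsem_spin_on_owner growth in the profiles below. A paraphrased sketch of the new check, simplified from kernel/locking/rwsem-xadd.c of this era (not the literal diff):

/* Sketch of rwsem_can_spin_on_owner() after the patch (simplified). */
static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
{
    struct task_struct *owner;
    bool ret = true;

    if (need_resched())
        return false;

    rcu_read_lock();
    owner = READ_ONCE(sem->owner);
    if (!owner) {
        /*
         * No owner recorded: before this patch we always bailed here,
         * since readers might hold the lock.  Now we bail only when
         * the count shows the lock is actually active; otherwise we
         * keep spinning and try to take the lock ourselves.
         */
        if (READ_ONCE(sem->count) & RWSEM_ACTIVE_MASK)
            ret = false;
    } else {
        ret = owner->on_cpu;    /* spin only while the owner runs */
    }
    rcu_read_unlock();
    return ret;
}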
testbox/testcase/testparams: ivb42/will-it-scale/performance-brk1
b3fd4f03ca0b9952 1a99367023f6ac664365a37fa5
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.Spurious_LAPIC_timer_interrupt_on_cpu
%stddev %change %stddev
\ | \
308 ± 3% +23.6% 381 ± 1% will-it-scale.time.system_time
99 ± 3% +20.2% 119 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
34098838 ± 1% +6.0% 36159517 ± 2% will-it-scale.time.voluntary_context_switches
314 ± 0% +2.5% 322 ± 0% will-it-scale.time.elapsed_time
314 ± 0% +2.5% 322 ± 0% will-it-scale.time.elapsed_time.max
0.61 ± 20% +428.8% 3.21 ± 5% perf-profile.cpu-cycles.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
0.39 ± 23% +127.3% 0.88 ± 14% perf-profile.cpu-cycles.osq_unlock.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
991202 ± 25% -47.8% 517752 ± 41% sched_debug.cpu#5.sched_count
481295 ± 25% -48.0% 250449 ± 42% sched_debug.cpu#5.sched_goidle
963157 ± 25% -47.9% 501898 ± 42% sched_debug.cpu#5.nr_switches
5.03 ± 16% +133.3% 11.73 ± 7% perf-profile.cpu-cycles.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
185603 ± 45% +99.3% 369978 ± 34% sched_debug.cpu#9.ttwu_count
17 ± 20% +75.0% 29 ± 35% sched_debug.cfs_rq[33]:/.load
1.07 ± 13% +88.8% 2.02 ± 17% perf-profile.cpu-cycles.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
2.41 ± 9% +92.7% 4.64 ± 12% perf-profile.cpu-cycles._raw_spin_lock_irqsave.rwsem_wake.call_rwsem_wake.sys_brk.system_call_fastpath
1201 ± 30% -45.8% 651 ± 21% cpuidle.C3-IVT.usage
1.92 ± 3% -39.8% 1.16 ± 19% perf-profile.cpu-cycles._raw_spin_lock.try_to_wake_up.wake_up_process.__rwsem_do_wake.rwsem_wake
1.10 ± 10% +93.6% 2.12 ± 5% perf-profile.cpu-cycles.up_write.vma_adjust.vma_merge.do_brk.sys_brk
6 ± 17% +92.3% 12 ± 22% sched_debug.cfs_rq[6]:/.runnable_load_avg
2.02 ± 13% +95.2% 3.94 ± 20% perf-profile.cpu-cycles.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
6 ± 36% +52.0% 9 ± 17% sched_debug.cpu#6.cpu_load[2]
2.63 ± 14% +95.0% 5.13 ± 18% perf-profile.cpu-cycles.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task
5 ± 20% +66.7% 8 ± 9% sched_debug.cpu#6.cpu_load[3]
2.41 ± 14% +93.1% 4.66 ± 19% perf-profile.cpu-cycles.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.enqueue_task
2.34 ± 14% +94.5% 4.55 ± 18% perf-profile.cpu-cycles.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.96 ± 13% +71.0% 1.65 ± 13% perf-profile.cpu-cycles.find_vma.sys_brk.system_call_fastpath
17462 ± 4% +15.1% 20096 ± 6% sched_debug.cfs_rq[4]:/.exec_clock
82 ± 24% +116.2% 177 ± 46% sched_debug.cfs_rq[27]:/.tg_load_contrib
155743 ± 31% +81.1% 281980 ± 34% sched_debug.cpu#14.sched_count
13.98 ± 6% +63.6% 22.87 ± 3% perf-profile.cpu-cycles.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
13.94 ± 6% +63.5% 22.78 ± 3% perf-profile.cpu-cycles.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
702 ± 12% -39.2% 427 ± 16% cpuidle.C1E-IVT.usage
103116 ± 29% +82.1% 187794 ± 22% sched_debug.cpu#41.sched_goidle
206574 ± 29% +82.0% 375906 ± 22% sched_debug.cpu#41.nr_switches
214754 ± 29% +79.4% 385314 ± 22% sched_debug.cpu#41.sched_count
5 ± 8% +52.4% 8 ± 8% sched_debug.cpu#6.cpu_load[4]
67108 ± 40% +86.7% 125260 ± 35% sched_debug.cpu#14.sched_goidle
134740 ± 40% +86.4% 251133 ± 35% sched_debug.cpu#14.nr_switches
1.42 ± 8% -33.0% 0.95 ± 6% perf-profile.cpu-cycles.cpuidle_select.cpu_startup_entry.start_secondary
1.27 ± 6% -34.1% 0.83 ± 7% perf-profile.cpu-cycles.menu_select.cpuidle_select.cpu_startup_entry.start_secondary
1.28 ± 7% +44.9% 1.85 ± 8% perf-profile.cpu-cycles.find_vma.do_munmap.sys_brk.system_call_fastpath
2.69 ± 4% +36.6% 3.68 ± 4% perf-profile.cpu-cycles.vma_adjust.vma_merge.do_brk.sys_brk.system_call_fastpath
40423 ± 2% +11.6% 45108 ± 3% sched_debug.cpu#6.nr_load_updates
1.24 ± 7% -31.6% 0.85 ± 11% perf-profile.cpu-cycles.check_preempt_curr.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.wake_up_process
5.67 ± 5% -29.9% 3.98 ± 7% perf-profile.cpu-cycles.perf_event_mmap.do_brk.sys_brk.system_call_fastpath
3.09 ± 1% -30.0% 2.16 ± 3% perf-profile.cpu-cycles.dequeue_task_fair.dequeue_task.deactivate_task.__sched_text_start.schedule
2.08 ± 7% -29.9% 1.46 ± 12% perf-profile.cpu-cycles.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.wake_up_process.__rwsem_do_wake
2.52 ± 1% -30.1% 1.76 ± 3% perf-profile.cpu-cycles.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task.__sched_text_start
3.20 ± 2% -28.8% 2.28 ± 4% perf-profile.cpu-cycles.pick_next_task_fair.__sched_text_start.schedule.schedule_preempt_disabled.cpu_startup_entry
10.55 ± 6% -24.9% 7.92 ± 4% perf-profile.cpu-cycles.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
1.16 ± 2% -30.1% 0.81 ± 9% perf-profile.cpu-cycles.free_pgtables.unmap_region.do_munmap.sys_brk.system_call_fastpath
1.10 ± 15% -28.9% 0.78 ± 5% perf-profile.cpu-cycles.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
18033 ± 2% +10.5% 19932 ± 6% sched_debug.cfs_rq[6]:/.exec_clock
5.61 ± 1% -27.5% 4.07 ± 4% perf-profile.cpu-cycles.__sched_text_start.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
5.74 ± 2% -27.5% 4.16 ± 5% perf-profile.cpu-cycles.schedule_preempt_disabled.cpu_startup_entry.start_secondary
3.30 ± 1% -28.5% 2.36 ± 4% perf-profile.cpu-cycles.dequeue_task.deactivate_task.__sched_text_start.schedule.rwsem_down_write_failed
5.67 ± 1% -27.3% 4.12 ± 4% perf-profile.cpu-cycles.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
3.31 ± 1% -28.5% 2.37 ± 3% perf-profile.cpu-cycles.deactivate_task.__sched_text_start.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed
4.74 ± 5% -29.1% 3.36 ± 6% perf-profile.cpu-cycles.perf_event_aux.perf_event_mmap.do_brk.sys_brk.system_call_fastpath
10.92 ± 5% -24.4% 8.25 ± 4% perf-profile.cpu-cycles.cpuidle_enter.cpu_startup_entry.start_secondary
6.64 ± 2% -28.1% 4.77 ± 3% perf-profile.cpu-cycles.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk.system_call_fastpath
6.51 ± 2% -28.2% 4.67 ± 4% perf-profile.cpu-cycles.__sched_text_start.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed.sys_brk
1.77 ± 3% -29.4% 1.25 ± 9% perf-profile.cpu-cycles.set_next_entity.pick_next_task_fair.__sched_text_start.schedule.schedule_preempt_disabled
1.32 ± 5% +33.2% 1.77 ± 9% perf-profile.cpu-cycles.up_write.sys_brk.system_call_fastpath
205 ± 11% +20.3% 247 ± 13% sched_debug.cpu#33.ttwu_local
5926 ± 3% +38.9% 8234 ± 23% sched_debug.cfs_rq[20]:/.exec_clock
244 ± 9% -26.4% 179 ± 8% sched_debug.cpu#26.ttwu_local
354306 ± 9% -20.2% 282834 ± 2% cpuidle.C6-IVT.usage
0.98 ± 9% -25.8% 0.73 ± 7% perf-profile.cpu-cycles.update_cfs_shares.dequeue_entity.dequeue_task_fair.dequeue_task.deactivate_task
17515 ± 3% +10.5% 19349 ± 3% sched_debug.cfs_rq[10]:/.exec_clock
0.79 ± 15% -27.5% 0.57 ± 13% perf-profile.cpu-cycles.resched_curr.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.wake_up_process
3.13 ± 2% +27.9% 4.00 ± 4% perf-profile.cpu-cycles.vma_merge.do_brk.sys_brk.system_call_fastpath
29.84 ± 2% -25.2% 22.32 ± 3% perf-profile.cpu-cycles.start_secondary
29.69 ± 2% -25.2% 22.21 ± 3% perf-profile.cpu-cycles.cpu_startup_entry.start_secondary
1.93 ± 5% -24.9% 1.45 ± 8% perf-profile.cpu-cycles.perf_event_aux_ctx.perf_event_aux.perf_event_mmap.do_brk.sys_brk
3.67 ± 4% -23.4% 2.81 ± 3% perf-profile.cpu-cycles.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
308 ± 3% +23.6% 381 ± 1% time.system_time
1.35 ± 11% -22.7% 1.05 ± 3% perf-profile.cpu-cycles.unmap_single_vma.unmap_vmas.unmap_region.do_munmap.sys_brk
4.09 ± 3% -22.8% 3.16 ± 3% perf-profile.cpu-cycles.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
1.05 ± 8% -19.7% 0.84 ± 2% perf-profile.cpu-cycles.lapic_next_deadline.clockevents_program_event.tick_program_event.__hrtimer_start_range_ns.hrtimer_start_range_ns
4.38 ± 3% -22.9% 3.38 ± 4% perf-profile.cpu-cycles.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
99 ± 3% +20.2% 119 ± 0% time.percent_of_cpu_this_job_got
4.31 ± 4% -22.4% 3.35 ± 3% perf-profile.cpu-cycles.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
5808 ± 4% +26.5% 7349 ± 14% sched_debug.cfs_rq[13]:/.exec_clock
2.05 ± 7% -21.1% 1.61 ± 2% perf-profile.cpu-cycles.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
1.58 ± 6% +21.9% 1.92 ± 8% perf-profile.cpu-cycles.anon_vma_clone.__split_vma.do_munmap.sys_brk.system_call_fastpath
1.43 ± 10% -20.5% 1.14 ± 2% perf-profile.cpu-cycles.tick_program_event.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit
9.35 ± 1% -19.6% 7.52 ± 5% perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.start_secondary
6081 ± 7% +14.6% 6970 ± 5% sched_debug.cfs_rq[15]:/.exec_clock
6.83 ± 3% +17.1% 8.00 ± 10% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
5956 ± 10% +17.5% 7000 ± 8% sched_debug.cfs_rq[17]:/.exec_clock
2.02 ± 6% -20.7% 1.60 ± 2% perf-profile.cpu-cycles.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry
2.57 ± 4% -19.3% 2.08 ± 4% perf-profile.cpu-cycles.hrtimer_start.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter.cpu_startup_entry
246343 ± 1% +9.9% 270748 ± 3% sched_debug.cfs_rq[14]:/.min_vruntime
1.40 ± 9% -20.3% 1.11 ± 2% perf-profile.cpu-cycles.clockevents_program_event.tick_program_event.__hrtimer_start_range_ns.hrtimer_start_range_ns.tick_nohz_restart
17631 ± 1% +15.3% 20334 ± 6% sched_debug.cfs_rq[8]:/.exec_clock
2.53 ± 4% -19.1% 2.05 ± 4% perf-profile.cpu-cycles.__hrtimer_start_range_ns.hrtimer_start.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_idle_enter
2222 ± 7% +8.8% 2419 ± 7% sched_debug.cpu#35.curr->pid
2.95 ± 7% -18.9% 2.40 ± 2% perf-profile.cpu-cycles.tick_nohz_restart.tick_nohz_idle_exit.cpu_startup_entry.start_secondary
6200 ± 7% +17.9% 7311 ± 4% sched_debug.cfs_rq[14]:/.exec_clock
7.35 ± 3% +13.7% 8.36 ± 9% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.try_to_wake_up
30922 ± 2% +14.9% 35531 ± 1% sched_debug.cpu#15.nr_load_updates
5940 ± 2% +28.2% 7617 ± 22% sched_debug.cfs_rq[18]:/.exec_clock
1.13 ± 18% +39.4% 1.57 ± 12% perf-profile.cpu-cycles.down_write.sys_brk.system_call_fastpath
10.54 ± 3% -12.9% 9.19 ± 3% perf-profile.cpu-cycles.do_brk.sys_brk.system_call_fastpath
40246 ± 1% +13.2% 45575 ± 3% sched_debug.cpu#8.nr_load_updates
30869 ± 2% +19.1% 36767 ± 4% sched_debug.cpu#20.nr_load_updates
2178 ± 4% +11.7% 2433 ± 6% sched_debug.cpu#33.curr->pid
17639 ± 2% +13.5% 20020 ± 6% sched_debug.cfs_rq[11]:/.exec_clock
39954 ± 2% +13.5% 45359 ± 2% sched_debug.cpu#4.nr_load_updates
2483 ± 2% -9.9% 2238 ± 4% time.involuntary_context_switches
31335 ± 1% +13.6% 35597 ± 3% sched_debug.cpu#13.nr_load_updates
30990 ± 3% +13.9% 35313 ± 2% sched_debug.cpu#17.nr_load_updates
246393 ± 3% +13.8% 280499 ± 6% sched_debug.cfs_rq[18]:/.min_vruntime
31272 ± 2% +14.6% 35823 ± 1% sched_debug.cpu#14.nr_load_updates
242514 ± 1% +13.0% 274042 ± 4% sched_debug.cfs_rq[13]:/.min_vruntime
17452 ± 4% +14.8% 20038 ± 4% sched_debug.cfs_rq[9]:/.exec_clock
39962 ± 3% +13.9% 45502 ± 3% sched_debug.cpu#9.nr_load_updates
31046 ± 1% +16.6% 36199 ± 5% sched_debug.cpu#18.nr_load_updates
62.38 ± 1% +14.6% 71.51 ± 1% perf-profile.cpu-cycles.sys_brk.system_call_fastpath
62.72 ± 1% +14.4% 71.76 ± 1% perf-profile.cpu-cycles.system_call_fastpath
39742 ± 2% +11.1% 44168 ± 1% sched_debug.cpu#10.nr_load_updates
30858596 ± 1% +11.6% 34423247 ± 3% cpuidle.C1-IVT.usage
3.52 ± 4% -8.0% 3.24 ± 3% perf-profile.cpu-cycles.unmap_region.do_munmap.sys_brk.system_call_fastpath
243796 ± 1% +10.9% 270426 ± 2% sched_debug.cfs_rq[16]:/.min_vruntime
16.93 ± 2% -13.5% 14.65 ± 6% perf-profile.cpu-cycles.try_to_wake_up.wake_up_process.__rwsem_do_wake.rwsem_wake.call_rwsem_wake
29303 ± 4% +11.6% 32702 ± 4% sched_debug.cpu#12.nr_load_updates
245510 ± 0% +10.3% 270675 ± 2% sched_debug.cfs_rq[19]:/.min_vruntime
244024 ± 1% +10.4% 269379 ± 1% sched_debug.cfs_rq[15]:/.min_vruntime
17.91 ± 2% -12.8% 15.62 ± 5% perf-profile.cpu-cycles.wake_up_process.__rwsem_do_wake.rwsem_wake.call_rwsem_wake.sys_brk
15043 ± 3% -8.3% 13799 ± 4% slabinfo.kmalloc-512.num_objs
246096 ± 0% +11.1% 273409 ± 3% sched_debug.cfs_rq[12]:/.min_vruntime
18.14 ± 2% -12.5% 15.87 ± 5% perf-profile.cpu-cycles.__rwsem_do_wake.rwsem_wake.call_rwsem_wake.sys_brk.system_call_fastpath
17738 ± 2% +11.4% 19752 ± 5% sched_debug.cfs_rq[1]:/.exec_clock
31513 ± 1% +13.4% 35747 ± 3% sched_debug.cpu#16.nr_load_updates
14995 ± 3% -8.2% 13765 ± 4% slabinfo.kmalloc-512.active_objs
39689 ± 2% +12.7% 44717 ± 2% sched_debug.cpu#11.nr_load_updates
31173 ± 2% +10.8% 34530 ± 0% sched_debug.cpu#19.nr_load_updates
2900 ± 2% +8.2% 3137 ± 6% slabinfo.kmalloc-2048.active_objs
50519 ± 2% +9.3% 55204 ± 0% sched_debug.cpu#43.nr_load_updates
754899 ± 3% -7.2% 700567 ± 5% sched_debug.cpu#35.avg_idle
2189 ± 6% -6.5% 2046 ± 4% sched_debug.cpu#47.curr->pid
245137 ± 1% +10.9% 271884 ± 1% sched_debug.cfs_rq[20]:/.min_vruntime
252683 ± 2% +6.8% 269903 ± 2% sched_debug.cfs_rq[22]:/.min_vruntime
250553 ± 4% +7.3% 268896 ± 2% sched_debug.cfs_rq[21]:/.min_vruntime
40942 ± 4% +10.5% 45255 ± 2% sched_debug.cpu#1.nr_load_updates
19657 ± 4% -9.8% 17725 ± 5% vmstat.system.in
27.10 ± 0% -0.7% 26.90 ± 0% turbostat.%Busy
4.10 ± 0% -2.4% 4.00 ± 0% turbostat.RAMWatt
testbox/testcase/testparams: lituya/will-it-scale/performance-brk1
b3fd4f03ca0b9952 1a99367023f6ac664365a37fa5
---------------- --------------------------
239 ± 1% +32.0% 316 ± 3% will-it-scale.time.system_time
80 ± 1% +30.4% 105 ± 3% will-it-scale.time.percent_of_cpu_this_job_got
52295908 ± 1% -5.4% 49462338 ± 0% will-it-scale.time.voluntary_context_switches
728289 ± 0% -1.8% 715194 ± 0% will-it-scale.per_thread_ops
63 ± 48% -36.9% 40 ± 7% sched_debug.cpu#12.load
223957 ± 16% -62.8% 83209 ± 16% cpuidle.C6-HSW.usage
31 ± 16% +116.1% 67 ± 34% sched_debug.cpu#14.load
80 ± 34% -60.7% 31 ± 20% sched_debug.cpu#2.load
73 ± 25% -53.4% 34 ± 12% sched_debug.cfs_rq[2]:/.load
300986 ± 24% -40.3% 179777 ± 42% sched_debug.cfs_rq[4]:/.min_vruntime
346 ± 33% +91.1% 662 ± 25% cpuidle.POLL.usage
1212812 ± 35% -44.7% 670407 ± 25% sched_debug.cpu#2.ttwu_count
144641 ± 35% -62.3% 54518 ± 15% sched_debug.cpu#6.ttwu_local
33 ± 25% +90.2% 63 ± 34% sched_debug.cfs_rq[14]:/.load
1377774 ± 40% +210.8% 4282777 ± 48% sched_debug.cpu#9.sched_count
34 ± 10% +109.4% 72 ± 43% sched_debug.cpu#14.cpu_load[0]
681074 ± 40% +210.0% 2111486 ± 49% sched_debug.cpu#9.sched_goidle
1362573 ± 40% +210.0% 4223660 ± 49% sched_debug.cpu#9.nr_switches
327 ± 7% +81.2% 593 ± 14% sched_debug.cfs_rq[14]:/.tg_load_contrib
588875 ± 12% +78.6% 1051474 ± 12% sched_debug.cpu#6.sched_count
292062 ± 13% +77.6% 518637 ± 12% sched_debug.cpu#6.sched_goidle
585096 ± 13% +77.5% 1038414 ± 12% sched_debug.cpu#6.nr_switches
262640 ± 6% -41.6% 153289 ± 11% sched_debug.cfs_rq[6]:/.min_vruntime
148498 ± 46% +113.4% 316963 ± 13% sched_debug.cpu#1.ttwu_local
1385681 ± 22% +86.1% 2578972 ± 18% sched_debug.cpu#8.ttwu_count
296 ± 9% +80.1% 533 ± 17% sched_debug.cfs_rq[14]:/.blocked_load_avg
24472 ± 25% -40.1% 14663 ± 48% sched_debug.cfs_rq[4]:/.exec_clock
32 ± 7% +79.7% 57 ± 34% sched_debug.cpu#14.cpu_load[1]
1650425 ± 13% -37.7% 1027432 ± 29% sched_debug.cpu#14.ttwu_count
57 ± 14% +36.2% 78 ± 10% sched_debug.cpu#0.load
43412 ± 13% -26.2% 32048 ± 22% sched_debug.cfs_rq[2]:/.exec_clock
33 ± 6% +67.7% 55 ± 21% sched_debug.cpu#13.cpu_load[0]
64 ± 17% -22.5% 50 ± 19% sched_debug.cpu#9.cpu_load[0]
53 ± 14% +34.4% 72 ± 5% sched_debug.cfs_rq[0]:/.load
31 ± 7% +53.5% 48 ± 22% sched_debug.cpu#14.cpu_load[2]
29 ± 10% +47.9% 43 ± 15% sched_debug.cpu#13.cpu_load[1]
32 ± 16% -36.9% 20 ± 24% sched_debug.cpu#4.cpu_load[1]
30 ± 5% +36.9% 41 ± 10% sched_debug.cpu#14.cpu_load[4]
31 ± 5% +40.5% 44 ± 14% sched_debug.cpu#14.cpu_load[3]
520038 ± 13% -19.4% 419303 ± 18% sched_debug.cfs_rq[2]:/.min_vruntime
469998 ± 13% +31.3% 616962 ± 12% sched_debug.cfs_rq[10]:/.min_vruntime
36098 ± 10% -40.6% 21432 ± 4% sched_debug.cpu#6.nr_load_updates
21 ± 26% -33.7% 14 ± 13% sched_debug.cpu#4.cpu_load[4]
1178 ± 12% +43.7% 1694 ± 13% sched_debug.cpu#14.curr->pid
21211 ± 10% -38.2% 13103 ± 12% sched_debug.cfs_rq[6]:/.exec_clock
39866 ± 16% +32.5% 52814 ± 16% sched_debug.cfs_rq[10]:/.exec_clock
24 ± 23% -35.1% 15 ± 15% sched_debug.cpu#4.cpu_load[3]
1.38 ± 12% -15.0% 1.17 ± 4% perf-profile.cpu-cycles.avc_has_perm_noaudit.cred_has_capability.selinux_capable.selinux_vm_enough_memory.security_vm_enough_memory_mm
239 ± 1% +32.0% 316 ± 3% time.system_time
394 ± 7% +26.1% 497 ± 5% sched_debug.cfs_rq[14]:/.tg_runnable_contrib
18088 ± 7% +26.4% 22861 ± 5% sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
39 ± 17% -33.8% 26 ± 29% sched_debug.cpu#4.cpu_load[0]
1228 ± 4% +32.1% 1622 ± 6% sched_debug.cpu#0.curr->pid
80 ± 1% +30.4% 105 ± 3% time.percent_of_cpu_this_job_got
27 ± 18% -34.9% 17 ± 19% sched_debug.cpu#4.cpu_load[2]
401 ± 9% -14.3% 344 ± 11% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
6.50 ± 1% -20.3% 5.19 ± 7% time.user_time
502752 ± 2% +29.0% 648466 ± 3% sched_debug.cfs_rq[14]:/.min_vruntime
53 ± 12% +23.6% 65 ± 4% sched_debug.cfs_rq[0]:/.runnable_load_avg
353 ± 12% +28.8% 455 ± 7% sched_debug.cfs_rq[12]:/.tg_runnable_contrib
16234 ± 12% +28.8% 20903 ± 7% sched_debug.cfs_rq[12]:/.avg->runnable_avg_sum
43966 ± 4% +26.9% 55773 ± 4% sched_debug.cfs_rq[14]:/.exec_clock
1344 ± 8% +15.4% 1552 ± 4% sched_debug.cpu#11.curr->pid
1080 ± 1% -15.6% 912 ± 4% time.involuntary_context_switches
433 ± 3% +16.3% 504 ± 2% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
19913 ± 3% +16.5% 23191 ± 2% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
54233 ± 6% +14.0% 61828 ± 6% sched_debug.cpu#14.nr_load_updates
48889 ± 7% +17.7% 57564 ± 2% sched_debug.cfs_rq[9]:/.exec_clock
59096 ± 3% +11.9% 66139 ± 4% sched_debug.cpu#9.nr_load_updates
53 ± 12% +18.8% 63 ± 7% sched_debug.cpu#0.cpu_load[0]
13853 ± 10% +35.6% 18786 ± 10% vmstat.system.in
346546 ± 1% -5.3% 328077 ± 0% vmstat.system.cs
1146 ± 0% +4.2% 1195 ± 0% turbostat.Avg_MHz
34.76 ± 0% +4.2% 36.23 ± 0% turbostat.%Busy
ivb42: Ivytown Ivy Bridge-EP
Memory: 64G
lituya: Grantley Haswell
Memory: 16G
will-it-scale.time.percent_of_cpu_this_job_got
125 ++--------------------------------------------------------------------+
O |
120 ++ O O O O |
115 ++ O O O O O O |
| O O O O O |
110 ++ O O O |
| |
105 ++ * |
| + + |
100 ++ .*..*.. + + .*
95 ++ *...*.. ..*.. .. . ..*..*.. + *...*. |
| .. *. * *.. .*. . .* |
90 ++..* *...*. *. |
*. |
85 ++--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
---
testcase: will-it-scale
default-monitors:
wait: pre-test
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 10
default_watchdogs:
watch-oom:
watchdog:
cpufreq_governor: performance
commit: 5aaa6b9f28b1c0c7623dcfd8d87e1d0e8bd4dde6
model: Ivytown Ivy Bridge-EP
nr_cpu: 48
memory: 64G
rootfs: debian-x86_64-2015-02-07.cgz
perf-profile:
freq: 800
will-it-scale:
test: brk1
testbox: ivb42
tbox_group: ivb42
kconfig: x86_64-rhel
enqueue_time: 2015-03-08 22:23:51.602528045 +08:00
head_commit: 5aaa6b9f28b1c0c7623dcfd8d87e1d0e8bd4dde6
base_commit: 9eccca0843205f87c00404b663188b88eb248051
branch: linux-devel/devel-hourly-2015030907
kernel: "/kernel/x86_64-rhel/5aaa6b9f28b1c0c7623dcfd8d87e1d0e8bd4dde6/vmlinuz-4.0.0-rc3-01051-g5aaa6b9"
user: lkp
queue: cyclic
result_root: "/result/ivb42/will-it-scale/performance-brk1/debian-x86_64-2015-02-07.cgz/x86_64-rhel/5aaa6b9f28b1c0c7623dcfd8d87e1d0e8bd4dde6/0"
job_file: "/lkp/scheduled/ivb42/cyclic_will-it-scale-performance-brk1-debian-x86_64.cgz-x86_64-rhel-HEAD-5aaa6b9f28b1c0c7623dcfd8d87e1d0e8bd4dde6-0-20150308-40925-1ugo6xp.yaml"
dequeue_time: 2015-03-09 08:52:18.512200890 +08:00
job_state: finished
loadavg: 29.17 17.88 7.43 1/420 10748
start_time: '1425862384'
end_time: '1425862718'
version: "/lkp/lkp/.src-20150308-175746"
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
./runtest.py brk1 25 both 1 12 24 36 48
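For context, the brk1 testcase driven by this command line is a tight loop that grows and shrinks the heap by one page, so nearly every iteration takes mmap_sem for writing inside sys_brk; that is why the rwsem write path dominates the profiles above. A sketch from memory of the per-task loop (the actual source is tests/brk1.c in the will-it-scale repo; details here are approximate):

#include <assert.h>
#include <unistd.h>

/* Approximate per-task loop of will-it-scale's brk1 testcase. */
void testcase(unsigned long long *iterations)
{
    unsigned long page_size = getpagesize();
    char *addr = sbrk(0);   /* current program break */

    while (1) {
        /* grow the heap by one page, then shrink it back */
        assert(brk(addr + page_size) == 0);
        assert(brk(addr) == 0);
        (*iterations) += 2;
    }
}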
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx