[rcu] 5057f55e543: dmesg.BUG:soft_lockup-CPU_stuck_for_s

From: Fengguang Wu
Date: Mon Oct 06 2014 - 01:50:47 EST


Hi Paul,

FYI, we noticed a number of ups and downs for commit

5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")
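
For context, the commit under test confines RCU's grace-period kthreads
to the housekeeping CPUs when NO_HZ_FULL is in effect, so that
adaptive-tick CPUs are not disturbed by grace-period processing. As a
rough illustration only (a minimal sketch of the binding idea in kernel
context, not the actual patch; the helper name is made up here, and the
NO_HZ_FULL_SYSIDLE special case plus most error handling are omitted):

	/*
	 * Sketch: restrict a kthread to the housekeeping CPUs, i.e.
	 * every possible CPU that is not in tick_nohz_full_mask.  The
	 * real patch applies the equivalent restriction to the RCU
	 * grace-period kthread itself.
	 */
	static void bind_kthread_to_housekeeping(struct task_struct *t)
	{
		cpumask_var_t cm;

		if (!tick_nohz_full_enabled())
			return;	/* no adaptive-tick CPUs configured */
		if (!zalloc_cpumask_var(&cm, GFP_KERNEL))
			return;	/* fail soft: leave affinity unchanged */
		cpumask_andnot(cm, cpu_possible_mask, tick_nohz_full_mask);
		if (!cpumask_empty(cm))
			set_cpus_allowed_ptr(t, cm);
		free_cpumask_var(cm);
	}

With the grace-period kthreads pinned this way, the scheduler and
softirq counters below (softirqs.RCU, softirqs.SCHED, sched_goidle,
ttwu_*) are exactly where such a change would be expected to show up.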

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-SCTP_RR

71a9b26963f8c2d  5057f55e543b7859cfd26bc28
---------------  -------------------------
        %stddev      %change        %stddev
            \           |               \
3 ±12% +145.5% 9 ±32% sched_debug.cfs_rq[0]:/.nr_spread_over
2438 ± 1% +10.3% 2689 ± 2% sched_debug.cpu#2.curr->pid
21560 ± 5% -24.1% 16358 ±14% sched_debug.cpu#2.sched_goidle
10790 ± 9% +64.7% 17776 ± 8% sched_debug.cpu#1.sched_goidle
20210 ±20% -25.9% 14968 ±11% sched_debug.cpu#0.sched_goidle
88244 ± 6% -51.1% 43188 ± 6% meminfo.DirectMap4k
1689 ± 4% +68.5% 2847 ±14% cpuidle.C1E-ATM.usage
2610 ± 3% -9.9% 2353 ± 4% sched_debug.cpu#3.curr->pid
19071 ± 4% -17.9% 15661 ± 1% softirqs.SCHED

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-SCTP_STREAM

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
21897 ±18% -34.0% 14445 ± 4% sched_debug.cfs_rq[2]:/.exec_clock
115 ±10% -14.1% 99 ± 9% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
20231 ±13% -16.2% 16951 ± 3% sched_debug.cfs_rq[3]:/.exec_clock
1.19 ±37% -53.6% 0.55 ±30% perf-profile.cpu-cycles.filemap_map_pages.do_read_fault.handle_mm_fault.__do_page_fault.do_page_fault
5346 ±10% -13.8% 4607 ± 9% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
291 ±24% -38.5% 179 ± 9% sched_debug.cfs_rq[2]:/.load
1.74 ±39% -54.1% 0.80 ± 4% perf-profile.cpu-cycles.__ctype_get_mb_cur_max
0.81 ±39% -43.4% 0.46 ±26% perf-profile.cpu-cycles.unmap_page_range.unmap_single_vma.unmap_vmas.exit_mmap.mmput
18 ± 5% -18.2% 15 ±14% sched_debug.cpu#0.cpu_load[4]
0.89 ±33% -51.5% 0.43 ±27% perf-profile.cpu-cycles.menu_select.cpuidle_select.cpu_startup_entry.start_secondary
1.29 ±23% -47.0% 0.68 ±13% perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.cpu_startup_entry.rest_init
314 ± 5% -23.3% 241 ± 8% sched_debug.cpu#2.load
1119966 ±19% -33.1% 748978 ± 7% sched_debug.cpu#2.sched_count
262 ±22% -28.0% 189 ±34% sched_debug.cfs_rq[0]:/.tg_load_contrib
115 ± 7% -15.6% 97 ± 4% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
5341 ± 7% -15.7% 4505 ± 4% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
488 ±13% +16.7% 569 ±12% sched_debug.cfs_rq[1]:/.load
15 ±21% +26.1% 19 ±19% sched_debug.cfs_rq[3]:/.nr_spread_over
472 ± 9% +18.2% 558 ±12% sched_debug.cpu#1.load
1917 ± 9% +19.1% 2284 ± 2% sched_debug.cpu#1.curr->pid
93022 ±12% -47.0% 49332 ±12% meminfo.DirectMap4k
39 ±33% -40.7% 23 ±27% sched_debug.cfs_rq[2]:/.runnable_load_avg
671451 ±14% +52.9% 1026958 ±10% sched_debug.cpu#1.sched_count
43745 ± 3% +16.6% 51000 ± 6% sched_debug.cpu#1.nr_load_updates
0.96 ±31% -49.3% 0.49 ±34% perf-profile.cpu-cycles.find_busiest_group.load_balance.pick_next_task_fair.__schedule.schedule
64360 ±17% +80.9% 116399 ±15% sched_debug.cfs_rq[1]:/.min_vruntime
36171 ± 6% -27.8% 26101 ± 1% softirqs.RCU
1537 ± 0% -20.1% 1228 ± 7% cpuidle.C2-ATM.usage
16404 ± 4% +20.9% 19837 ± 2% sched_debug.cfs_rq[0]:/.exec_clock
143516 ±23% -34.7% 93764 ±12% sched_debug.cfs_rq[2]:/.min_vruntime
1198 ± 3% -14.8% 1021 ± 7% cpuidle.C4-ATM.usage
42206 ± 2% -17.8% 34709 ± 3% sched_debug.cpu#3.nr_load_updates
10 ± 4% +41.9% 14 ± 8% sched_debug.cfs_rq[2]:/.nr_spread_over
1788 ± 8% -21.7% 1399 ±19% sched_debug.cpu#3.curr->pid
43004 ±10% -25.3% 32112 ± 5% sched_debug.cpu#2.nr_load_updates
28737 ± 1% -18.2% 23511 ± 3% softirqs.SCHED
881 ± 0% +14.3% 1007 ± 8% slabinfo.kmalloc-96.active_objs
881 ± 0% +14.3% 1007 ± 8% slabinfo.kmalloc-96.num_objs
1.10 ±10% -24.3% 0.83 ±18% perf-profile.cpu-cycles.copy_user_generic_string.skb_copy_datagram_iovec.sctp_recvmsg.sock_common_recvmsg.sock_recvmsg
2717 ±18% +27.9% 3474 ± 7% slabinfo.anon_vma.active_objs
2862 ±10% +21.4% 3474 ± 7% slabinfo.anon_vma.num_objs

testbox/testcase/testparams: lkp-a04/netperf/900s-200%-TCP_STREAM

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
7 ±11% +47.6% 10 ±12% sched_debug.cfs_rq[1]:/.nr_spread_over
2830 ± 7% +11.0% 3142 ± 2% sched_debug.cpu#0.curr->pid
511581 ±12% +28.0% 654949 ± 4% sched_debug.cpu#0.avg_idle
309 ± 7% +20.3% 372 ±10% sched_debug.cfs_rq[3]:/.blocked_load_avg
669223 ±10% +20.7% 808043 ±12% sched_debug.cfs_rq[1]:/.MIN_vruntime
669224 ±10% +20.7% 808044 ±12% sched_debug.cfs_rq[1]:/.max_vruntime
1005242 ± 7% -23.6% 767543 ±17% sched_debug.cpu#0.sched_count
187491 ± 8% -15.6% 158259 ±10% cpuidle.C1E-ATM.time
476437 ± 4% -38.5% 293130 ±23% sched_debug.cpu#0.ttwu_local
507 ± 7% -11.0% 451 ± 3% sched_debug.cpu#2.load
667256 ±11% +39.5% 931026 ±21% sched_debug.cpu#2.sched_count
939243 ± 3% -31.0% 648447 ±16% sched_debug.cpu#0.nr_switches
311196 ± 2% +39.4% 433868 ±20% sched_debug.cpu#3.ttwu_count
599407 ± 4% +38.6% 830893 ±16% sched_debug.cpu#3.sched_count
569241 ± 0% +38.8% 789955 ±19% sched_debug.cpu#2.nr_switches
14160 ± 9% +31.6% 18630 ± 3% sched_debug.cpu#2.sched_goidle
2007 ± 0% +90.3% 3820 ± 7% meminfo.AnonHugePages
544571 ± 4% -31.9% 370869 ±17% sched_debug.cpu#0.ttwu_count
3536 ± 8% -14.8% 3013 ± 6% sched_debug.cpu#1.curr->pid
26536 ± 2% -38.5% 16313 ± 8% sched_debug.cpu#0.sched_goidle
59574 ±27% -39.0% 36314 ±27% cpuidle.C2-ATM.time
951369 ± 6% -17.7% 782526 ±18% sched_debug.cpu#1.sched_count
162 ±30% -43.4% 92 ±23% cpuidle.C2-ATM.usage
10 ± 8% +20.0% 12 ± 6% sched_debug.cfs_rq[2]:/.nr_spread_over
858 ± 3% +10.6% 949 ± 3% slabinfo.buffer_head.active_objs
858 ± 3% +10.6% 949 ± 3% slabinfo.buffer_head.num_objs
868 ± 4% +16.1% 1008 ± 3% slabinfo.kmalloc-96.active_objs
868 ± 4% +16.1% 1008 ± 3% slabinfo.kmalloc-96.num_objs
1209498 ± 1% -6.5% 1131376 ± 1% time.involuntary_context_switches
5138 ± 1% -2.5% 5011 ± 0% vmstat.system.cs

testbox/testcase/testparams: lkp-a04/netperf/900s-200%-TCP_MAERTS

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
665275 ± 5% -14.5% 569138 ± 9% sched_debug.cpu#3.avg_idle
20642 ± 6% -56.4% 9003 ±40% sched_debug.cpu#3.sched_goidle
195414 ± 1% -31.4% 134127 ±33% cpuidle.C1E-ATM.time
23091 ±16% -40.2% 13807 ±30% sched_debug.cpu#2.sched_goidle
11208 ± 6% +77.5% 19895 ±14% sched_debug.cpu#1.sched_goidle
3459 ± 5% -11.9% 3047 ± 4% sched_debug.cpu#1.curr->pid
13303 ±18% +56.5% 20817 ±26% sched_debug.cpu#0.sched_goidle
4409 ± 1% -32.9% 2958 ±31% cpuidle.C1E-ATM.usage
32560 ±43% +91.7% 62409 ± 3% cpuidle.C4-ATM.time
45 ±25% +92.7% 88 ±17% cpuidle.C4-ATM.usage
3125 ± 1% +20.9% 3779 ± 7% sched_debug.cpu#3.curr->pid
1716 ± 3% +10.8% 1901 ± 7% slabinfo.kmalloc-192.active_objs
306953 ± 4% -23.3% 235502 ±20% cpuidle.C6-ATM.time
1.97 ± 1% +10.1% 2.17 ± 1% perf-profile.cpu-cycles.tcp_sendmsg.inet_sendmsg.sock_sendmsg.SYSC_sendto.sys_sendto

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-10K-SCTP_STREAM_MANY

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0.90 ±27% +67.8% 1.51 ±31% sched_debug.cfs_rq[3]:/.spread
395 ± 9% +24.2% 491 ± 3% sched_debug.cfs_rq[3]:/.runnable_load_avg
2.00 ±28% -48.7% 1.03 ±17% sched_debug.cfs_rq[0]:/.spread
866 ± 8% -20.1% 692 ±12% sched_debug.cpu#2.load
346856 ± 5% +36.3% 472836 ±13% sched_debug.cpu#1.avg_idle
76913 ±19% -30.7% 53329 ±12% sched_debug.cpu#0.sched_goidle
87561 ±33% -44.4% 48649 ±22% meminfo.DirectMap4k
825954 ± 2% +14.6% 946706 ± 8% sched_debug.cpu#1.sched_count
327181 ± 1% +9.7% 358887 ± 2% sched_debug.cfs_rq[1]:/.min_vruntime
335128 ± 3% +9.1% 365583 ± 4% sched_debug.cfs_rq[0]:/.min_vruntime
77300 ±37% +50.5% 116358 ±18% sched_debug.cfs_rq[3]:/.MIN_vruntime
77301 ±37% +50.5% 116359 ±18% sched_debug.cfs_rq[3]:/.max_vruntime
572 ± 9% +31.3% 751 ± 4% sched_debug.cfs_rq[3]:/.load
16994 ± 0% -1.7% 16710 ± 1% vmstat.system.cs

testbox/testcase/testparams: lkp-snb01/hackbench/1600%-process-pipe

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
156776 ± 1% +2.9% 161328 ± 0% hackbench.throughput
2333181.35 ±11% -100.0% 0.00 ± 0% sched_debug.cfs_rq[20]:/.max_vruntime
2333181.35 ±11% -100.0% 0.00 ± 0% sched_debug.cfs_rq[20]:/.MIN_vruntime
1852 ±34% -47.3% 977 ±40% numa-vmstat.node1.nr_inactive_anon
7403 ±34% -46.5% 3958 ±39% numa-meminfo.node1.Inactive(anon)
1134 ±27% +93.9% 2199 ±39% sched_debug.cfs_rq[21]:/.nr_spread_over
12900 ±20% +25.5% 16194 ± 9% numa-meminfo.node0.Inactive(anon)
3230 ±20% +24.1% 4008 ± 8% numa-vmstat.node0.nr_inactive_anon
58 ±14% -38.1% 36 ±16% sched_debug.cpu#12.load
51 ± 0% -24.7% 38 ±19% sched_debug.cpu#12.cpu_load[1]
52 ± 2% -23.7% 39 ±18% sched_debug.cpu#12.cpu_load[2]
43 ± 3% -19.4% 34 ±18% sched_debug.cfs_rq[12]:/.runnable_load_avg
57 ±19% -37.6% 36 ±21% sched_debug.cpu#26.load
17 ±13% +81.1% 32 ±24% sched_debug.cfs_rq[4]:/.load
863 ± 3% +16.3% 1004 ± 7% sched_debug.cfs_rq[5]:/.tg_load_contrib
31 ±13% +87.2% 58 ± 9% sched_debug.cpu#17.cpu_load[1]
30 ±12% +67.0% 50 ± 9% sched_debug.cpu#17.cpu_load[2]
834 ± 8% +22.7% 1023 ±14% sched_debug.cfs_rq[25]:/.blocked_load_avg
51 ± 9% -14.8% 44 ± 8% sched_debug.cpu#27.cpu_load[1]
52 ± 3% -14.6% 44 ±12% sched_debug.cpu#14.cpu_load[2]
51 ± 3% -20.9% 40 ±17% sched_debug.cpu#12.cpu_load[3]
53 ± 3% -20.6% 42 ±10% sched_debug.cpu#27.cpu_load[4]
49 ± 3% -18.4% 40 ±16% sched_debug.cpu#12.cpu_load[4]
43 ± 2% -19.2% 35 ±16% sched_debug.cfs_rq[14]:/.runnable_load_avg
878 ± 8% +21.7% 1069 ±12% sched_debug.cfs_rq[25]:/.tg_load_contrib
58 ±14% -31.0% 40 ±14% sched_debug.cpu#14.load
56 ±23% -36.1% 36 ±19% sched_debug.cfs_rq[26]:/.load
825 ±10% +25.1% 1032 ±10% sched_debug.cfs_rq[4]:/.tg_load_contrib
18 ± 5% +134.5% 43 ±21% sched_debug.cfs_rq[17]:/.runnable_load_avg
846 ± 3% +15.8% 979 ± 7% sched_debug.cfs_rq[5]:/.blocked_load_avg
17 ± 9% +45.1% 24 ±26% sched_debug.cfs_rq[0]:/.load
52 ± 2% -19.6% 42 ±13% sched_debug.cpu#27.cpu_load[2]
16 ± 5% +43.8% 23 ±28% sched_debug.cfs_rq[3]:/.runnable_load_avg
54 ± 1% -22.2% 42 ±11% sched_debug.cpu#27.cpu_load[3]
50 ± 3% -20.4% 40 ±16% sched_debug.cfs_rq[14]:/.load
44 ± 2% -21.6% 35 ±16% sched_debug.cfs_rq[11]:/.runnable_load_avg
20084 ± 4% -10.7% 17926 ± 6% sched_debug.cpu#5.curr->pid
1013 ±13% +22.7% 1242 ±10% sched_debug.cfs_rq[7]:/.tg_load_contrib
47 ±21% -32.2% 32 ±22% sched_debug.cpu#27.load
28 ±11% +60.5% 46 ± 9% sched_debug.cpu#17.cpu_load[3]
19 ±37% +42.1% 27 ±26% sched_debug.cpu#0.cpu_load[2]
16 ± 8% +60.4% 25 ±25% sched_debug.cpu#1.cpu_load[2]
39420050 ± 4% -12.6% 34437873 ± 5% sched_debug.cpu#9.ttwu_count
15 ± 9% +46.7% 22 ±29% sched_debug.cfs_rq[0]:/.runnable_load_avg
21495 ± 6% -15.4% 18194 ± 5% sched_debug.cpu#10.curr->pid
18 ±24% +44.4% 26 ±23% sched_debug.cpu#0.cpu_load[3]
1118 ± 7% +22.3% 1367 ± 9% sched_debug.cfs_rq[20]:/.blocked_load_avg
47 ± 6% -27.7% 34 ±26% sched_debug.cpu#25.load
4513 ±10% +63.0% 7356 ±40% numa-vmstat.node0.nr_kernel_stack
16 ± 8% +43.8% 23 ±28% sched_debug.cfs_rq[5]:/.runnable_load_avg
37 ± 8% -23.2% 28 ±14% sched_debug.cfs_rq[30]:/.runnable_load_avg
2002 ± 4% -92.2% 157 ± 8% cpuidle.POLL.usage
286004 ±22% +91.0% 546269 ±29% sched_debug.cpu#2.avg_idle
266606 ± 8% +57.9% 421009 ±31% sched_debug.cpu#3.avg_idle
16 ±10% +48.0% 24 ±26% sched_debug.cpu#1.cpu_load[3]
10084 ± 1% -12.5% 8827 ± 1% proc-vmstat.pgactivate
993 ±13% +22.6% 1217 ±10% sched_debug.cfs_rq[7]:/.blocked_load_avg
33083232 ± 2% -6.6% 30915131 ± 4% sched_debug.cpu#11.ttwu_count
44 ± 5% -22.4% 34 ±15% sched_debug.cfs_rq[10]:/.runnable_load_avg
27 ±20% +36.6% 37 ±12% sched_debug.cpu#18.cpu_load[2]
9531695 ±11% +43.5% 13678188 ±28% proc-vmstat.pgalloc_dma32
39 ± 3% -24.8% 29 ±19% sched_debug.cfs_rq[26]:/.runnable_load_avg
937152 ±12% +66.6% 1561391 ± 6% cpuidle.C3-SNB.time
17 ±17% +49.0% 25 ±22% sched_debug.cpu#0.cpu_load[4]
16 ±11% +49.0% 24 ±25% sched_debug.cpu#5.cpu_load[0]
15 ±10% +63.8% 25 ±25% sched_debug.cpu#1.cpu_load[1]
17 ± 8% +43.1% 24 ±28% sched_debug.cpu#1.cpu_load[4]
46 ± 8% -27.3% 33 ±28% sched_debug.cfs_rq[25]:/.load
71 ±10% -27.1% 52 ±29% sched_debug.cpu#15.cpu_load[0]
17408 ± 3% -14.0% 14970 ± 2% sched_debug.cpu#28.curr->pid
41 ± 9% -22.8% 31 ±15% sched_debug.cfs_rq[28]:/.runnable_load_avg
16 ±11% +65.3% 27 ±37% sched_debug.cpu#5.cpu_load[1]
45 ±11% -26.7% 33 ± 7% sched_debug.cfs_rq[28]:/.load
51 ± 5% -23.2% 39 ±19% sched_debug.cfs_rq[10]:/.load
53 ±10% -21.1% 42 ± 6% sched_debug.cpu#10.cpu_load[3]
49 ±10% +34.0% 65 ±16% sched_debug.cpu#25.cpu_load[0]
45 ±11% -27.2% 33 ± 9% sched_debug.cpu#28.load
54 ±18% -33.1% 36 ±16% sched_debug.cpu#9.load
50 ± 7% -19.7% 40 ± 9% sched_debug.cpu#10.cpu_load[4]
1153 ± 8% +21.0% 1396 ± 9% sched_debug.cfs_rq[20]:/.tg_load_contrib
42 ± 5% -20.5% 33 ±16% sched_debug.cfs_rq[15]:/.runnable_load_avg
17980 ± 5% -18.3% 14689 ±10% sched_debug.cpu#26.curr->pid
15 ± 9% +53.3% 23 ±22% sched_debug.cfs_rq[21]:/.load
17 ± 8% +56.9% 26 ±38% sched_debug.cpu#5.cpu_load[2]
15 ±10% +44.7% 22 ±29% sched_debug.cpu#1.cpu_load[0]
1 ± 0% +133.3% 2 ±20% sched_debug.cfs_rq[11]:/.nr_spread_over
56 ± 6% -14.3% 48 ±12% sched_debug.cpu#15.cpu_load[3]
18 ± 7% +46.3% 26 ±36% sched_debug.cpu#5.cpu_load[3]
16 ± 2% +42.0% 23 ±24% sched_debug.cpu#3.cpu_load[0]
65 ± 2% -19.8% 52 ±14% sched_debug.cpu#15.cpu_load[1]
52 ± 6% -14.6% 45 ±12% sched_debug.cpu#15.cpu_load[4]
55 ± 2% -12.1% 48 ± 7% sched_debug.cpu#11.load
18 ± 7% +42.6% 25 ±34% sched_debug.cpu#5.cpu_load[4]
0.47 ± 8% +101.4% 0.95 ±17% perf-profile.cpu-cycles._raw_spin_lock.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
15 ±10% +44.7% 22 ±29% sched_debug.cfs_rq[1]:/.runnable_load_avg
50 ± 6% -21.1% 40 ±17% sched_debug.cpu#10.load
26 ± 9% +57.5% 42 ±10% sched_debug.cpu#17.cpu_load[4]
60 ± 3% -14.9% 51 ± 8% sched_debug.cpu#15.cpu_load[2]
1170 ± 7% -21.7% 916 ±21% sched_debug.cfs_rq[28]:/.blocked_load_avg
18 ±12% +88.9% 34 ±43% sched_debug.cpu#6.load
18 ±10% +34.5% 24 ±21% sched_debug.cpu#3.cpu_load[3]
25 ± 9% +34.7% 33 ±19% sched_debug.cpu#18.cpu_load[4]
1216 ± 7% -21.8% 950 ±20% sched_debug.cfs_rq[28]:/.tg_load_contrib
26 ±12% +35.4% 35 ±14% sched_debug.cpu#18.cpu_load[3]
48104532 ±11% +45.0% 69756781 ±28% numa-numastat.node0.local_node
48104551 ±11% +45.0% 69758195 ±28% numa-numastat.node0.numa_hit
20 ±33% +54.1% 31 ±22% sched_debug.cpu#6.cpu_load[1]
808 ±10% +24.6% 1007 ±12% sched_debug.cfs_rq[4]:/.blocked_load_avg
48 ± 2% -14.4% 41 ± 9% sched_debug.cpu#8.cpu_load[4]
14 ±17% +51.2% 21 ±27% sched_debug.cfs_rq[16]:/.runnable_load_avg
27 ±26% +165.9% 72 ±16% sched_debug.cpu#17.cpu_load[0]
3.41 ± 0% +12.9% 3.85 ± 2% turbostat.%c7
15630 ± 2% -18.4% 12759 ±23% numa-vmstat.node1.nr_kernel_stack
267232 ± 8% +105.7% 549732 ±37% sched_debug.cpu#6.avg_idle
247641 ±18% +56.4% 387298 ±22% sched_debug.cpu#5.avg_idle
1.685e+08 ± 6% -14.4% 1.442e+08 ± 4% cpuidle.C1-SNB.time
0.79 ± 4% +54.4% 1.22 ±13% perf-profile.cpu-cycles._raw_spin_lock_irqsave.__wake_up_sync_key.pipe_write.do_sync_write.vfs_write
18 ± 6% +34.5% 24 ±24% sched_debug.cpu#3.cpu_load[4]
44 ± 4% -16.7% 36 ± 9% sched_debug.cfs_rq[8]:/.runnable_load_avg
47 ± 4% -14.0% 41 ± 8% sched_debug.cpu#9.cpu_load[4]
0.98 ± 3% -43.9% 0.55 ±33% perf-profile.cpu-cycles.llist_add_batch.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
45 ± 3% -22.8% 35 ±12% sched_debug.cfs_rq[9]:/.runnable_load_avg
19 ±20% +50.0% 29 ±25% sched_debug.cpu#6.cpu_load[2]
270052 ±19% +109.1% 564602 ±40% sched_debug.cpu#7.avg_idle
16 ±10% +50.0% 25 ±24% sched_debug.cfs_rq[6]:/.runnable_load_avg
18 ±13% +42.9% 26 ±22% sched_debug.cfs_rq[6]:/.load
1.27 ± 3% -43.5% 0.72 ±35% perf-profile.cpu-cycles.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate.sched_ttwu_pending
79 ±13% +50.6% 119 ±34% sched_debug.cpu#0.nr_running
316918 ±22% +46.5% 464434 ±19% sched_debug.cpu#18.avg_idle
17 ±10% +48.1% 25 ±17% sched_debug.cpu#3.cpu_load[1]
284 ± 5% -29.3% 200 ±23% sched_debug.cpu#11.nr_running
271 ± 5% -30.1% 189 ±19% sched_debug.cpu#15.nr_running
51 ±23% -38.7% 31 ± 6% sched_debug.cfs_rq[31]:/.load
51 ±22% -38.6% 31 ±13% sched_debug.cpu#31.load
274345 ± 8% +44.0% 395193 ±18% sched_debug.cpu#19.avg_idle
19 ±10% +39.7% 27 ±26% sched_debug.cpu#6.cpu_load[4]
232 ± 9% -22.6% 180 ±20% sched_debug.cpu#31.nr_running
362 ± 8% +35.3% 490 ± 6% slabinfo.kmem_cache.active_objs
362 ± 8% +35.3% 490 ± 6% slabinfo.kmem_cache.num_objs
230 ± 5% -23.8% 175 ±19% sched_debug.cpu#28.nr_running
16860 ± 1% -21.6% 13224 ± 8% sched_debug.cpu#23.curr->pid
19 ±12% +42.4% 28 ±25% sched_debug.cpu#6.cpu_load[3]
297114 ± 8% +44.6% 429601 ±16% sched_debug.cpu#16.avg_idle
673 ± 7% -21.3% 530 ±10% slabinfo.blkdev_requests.active_objs
673 ± 7% -21.3% 530 ±10% slabinfo.blkdev_requests.num_objs
277 ± 6% -32.1% 188 ±21% sched_debug.cpu#10.nr_running
362 ±34% -43.9% 203 ±19% sched_debug.cpu#13.nr_running
249 ± 3% -34.7% 162 ±22% sched_debug.cpu#25.nr_running
18 ±12% +40.7% 25 ±18% sched_debug.cpu#3.cpu_load[2]
236 ± 5% -26.1% 174 ±24% sched_debug.cpu#27.nr_running
234 ± 7% -25.0% 176 ±22% sched_debug.cpu#24.nr_running
51 ±16% -42.5% 29 ±17% sched_debug.cpu#29.load
1.00 ± 4% -36.0% 0.64 ±29% perf-profile.cpu-cycles.mutex_lock.pipe_wait.pipe_read.do_sync_read.vfs_read
882415 ±14% -27.3% 641860 ±21% sched_debug.cpu#9.avg_idle
1.93 ± 5% -34.0% 1.28 ±24% perf-profile.cpu-cycles.effective_load.isra.38.select_task_rq_fair.try_to_wake_up.default_wake_function.autoremove_wake_function
2.21 ± 5% -11.3% 1.96 ± 4% turbostat.%c1
60 ±16% -34.8% 39 ±18% sched_debug.cfs_rq[11]:/.load
57 ±14% -32.2% 38 ±17% sched_debug.cfs_rq[13]:/.load
281 ± 1% -31.4% 192 ±18% sched_debug.cpu#12.nr_running
3.91 ± 3% -30.0% 2.73 ±20% perf-profile.cpu-cycles.idle_cpu.select_task_rq_fair.try_to_wake_up.default_wake_function.autoremove_wake_function
101299 ± 2% -17.5% 83592 ±20% numa-vmstat.node1.nr_slab_unreclaimable
471 ± 6% +27.1% 599 ± 5% slabinfo.kmem_cache_node.active_objs
742 ± 7% -20.8% 588 ±10% slabinfo.xfs_buf.num_objs
742 ± 7% -20.8% 588 ±10% slabinfo.xfs_buf.active_objs
280 ± 7% -26.4% 206 ±20% sched_debug.cpu#14.nr_running
490 ± 6% +26.1% 618 ± 4% slabinfo.kmem_cache_node.num_objs
235 ± 6% -27.4% 171 ±21% sched_debug.cpu#26.nr_running
64 ± 7% +58.8% 102 ±38% sched_debug.cpu#23.nr_running
17597 ± 5% -21.9% 13742 ± 5% sched_debug.cpu#29.curr->pid
1.53 ± 3% +36.5% 2.09 ±15% perf-profile.cpu-cycles.copy_user_generic_string.pipe_write.do_sync_write.vfs_write.sys_write
43 ± 9% -31.0% 29 ±20% sched_debug.cfs_rq[29]:/.load
0.79 ± 5% +34.2% 1.06 ±12% perf-profile.cpu-cycles.vfs_read.sys_read.system_call_fastpath.__read_nocancel
1.61 ± 2% -33.3% 1.08 ±23% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
39 ± 5% -29.9% 27 ±22% sched_debug.cfs_rq[29]:/.runnable_load_avg
0.77 ± 8% +28.3% 0.98 ±13% perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_read
48 ±20% -34.9% 31 ±14% sched_debug.cfs_rq[24]:/.runnable_load_avg
155404 ± 0% +18.9% 184780 ± 0% softirqs.SCHED
3.37 ± 1% -20.6% 2.68 ± 6% perf-profile.cpu-cycles.mutex_unlock.do_sync_write.vfs_write.sys_write.system_call_fastpath
0.71 ±10% +41.3% 1.00 ±18% perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
288 ± 7% -27.1% 210 ±19% sched_debug.cpu#8.nr_running
1.03 ± 8% -32.0% 0.70 ±24% perf-profile.cpu-cycles._raw_spin_unlock_irqrestore.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
0.95 ± 5% +34.3% 1.28 ±15% perf-profile.cpu-cycles._raw_spin_unlock_irqrestore.__wake_up_sync_key.pipe_write.do_sync_write.vfs_write
829183 ±18% -23.4% 635360 ±12% sched_debug.cpu#24.avg_idle
33066 ± 7% +50.4% 49736 ±35% numa-vmstat.node0.nr_slab_unreclaimable
50 ± 6% -19.2% 40 ±13% sched_debug.cpu#31.cpu_load[2]
233 ± 6% -24.5% 176 ±21% sched_debug.cpu#29.nr_running
245 ± 6% -24.7% 185 ±20% sched_debug.cpu#9.nr_running
49 ± 4% -18.1% 40 ±13% sched_debug.cpu#31.cpu_load[3]
18064 ± 1% -21.3% 14220 ± 4% sched_debug.cpu#31.curr->pid
39 ±11% -27.4% 28 ±15% sched_debug.cfs_rq[31]:/.runnable_load_avg
44 ± 5% -19.4% 36 ±14% sched_debug.cfs_rq[13]:/.runnable_load_avg
0.98 ± 2% -20.7% 0.78 ±12% perf-profile.cpu-cycles.__wake_up_common.__wake_up_sync_key.pipe_write.do_sync_write.vfs_write
49911 ± 8% +32.5% 66126 ±27% numa-vmstat.node0.nr_page_table_pages
0.91 ± 6% +30.9% 1.19 ±13% perf-profile.cpu-cycles.__srcu_read_lock.fsnotify.vfs_write.sys_write.system_call_fastpath
0.85 ± 7% +24.4% 1.05 ±14% perf-profile.cpu-cycles.__sb_start_write.pipe_write.do_sync_write.vfs_write.sys_write
4.09 ± 1% -25.4% 3.05 ±17% perf-profile.cpu-cycles.__schedule.schedule.pipe_wait.pipe_read.do_sync_read
1.24 ± 2% +22.5% 1.52 ±11% perf-profile.cpu-cycles.avc_has_perm.inode_has_perm.file_has_perm.selinux_file_permission.security_file_permission
45 ±27% -35.0% 29 ±18% sched_debug.cpu#30.load
697427 ± 2% -22.7% 539034 ±20% sched_debug.cpu#29.avg_idle
1.59 ±10% +31.9% 2.09 ±15% perf-profile.cpu-cycles.pipe_write.do_sync_write.vfs_write.sys_write.system_call_fastpath
0.75 ±10% +31.9% 0.99 ±13% perf-profile.cpu-cycles.__fget_light.sys_write.system_call_fastpath.__write_nocancel
0.78 ± 4% +29.1% 1.01 ±16% perf-profile.cpu-cycles.__srcu_read_unlock.fsnotify.vfs_write.sys_write.system_call_fastpath
49 ± 3% -16.9% 41 ±15% sched_debug.cpu#31.cpu_load[4]
0.65 ± 7% +33.0% 0.86 ±15% perf-profile.cpu-cycles.__sb_end_write.pipe_write.do_sync_write.vfs_write.sys_write
0.69 ± 1% +23.1% 0.85 ±13% perf-profile.cpu-cycles.__write_nocancel
1.06 ± 3% -19.8% 0.85 ±15% perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
41260 ± 4% -4.9% 39232 ± 5% sched_debug.cfs_rq[17]:/.tg_load_avg
1106 ±12% -11.2% 982 ± 5% numa-meminfo.node1.Unevictable
1103 ±12% -11.2% 979 ± 5% numa-meminfo.node1.Mlocked
228 ± 6% -21.4% 179 ±21% sched_debug.cpu#30.nr_running
2.41 ± 2% +18.0% 2.84 ±10% perf-profile.cpu-cycles.copy_user_generic_string.pipe_read.do_sync_read.vfs_read.sys_read
9444 ± 3% -6.5% 8831 ± 5% slabinfo.proc_inode_cache.active_objs
1.928e+09 ± 0% -2.6% 1.878e+09 ± 0% time.voluntary_context_switches
18293607 ± 1% +3.6% 18950807 ± 0% time.minor_page_faults
4628480 ± 1% -1.9% 4541031 ± 0% vmstat.system.cs
1057268 ± 0% -4.4% 1010922 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-snb01/hackbench/1600%-process-socket

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
136352 ± 1% +5.3% 143641 ± 0% hackbench.throughput
3768 ±12% -27.4% 2735 ± 5% sched_debug.cfs_rq[27]:/.blocked_load_avg
29 ±26% +51.7% 45 ±11% sched_debug.cpu#24.cpu_load[3]
2752 ±22% -48.4% 1418 ±37% sched_debug.cfs_rq[14]:/.tg_load_contrib
18318 ± 9% -15.1% 15561 ± 1% sched_debug.cpu#25.curr->pid
25 ±35% +101.3% 51 ±32% sched_debug.cpu#2.cpu_load[1]
27 ±32% +53.0% 42 ±20% sched_debug.cpu#2.cpu_load[2]
1196 ±44% +92.2% 2299 ±30% sched_debug.cfs_rq[2]:/.blocked_load_avg
3822 ±11% -27.4% 2777 ± 5% sched_debug.cfs_rq[27]:/.tg_load_contrib
17 ±42% -55.8% 7 ±49% sched_debug.cpu#0.cpu_load[2]
62 ±42% -50.3% 31 ±17% sched_debug.cpu#10.cpu_load[0]
18 ±45% -58.2% 7 ±49% sched_debug.cpu#0.cpu_load[3]
2980 ±15% -56.5% 1296 ±30% sched_debug.cfs_rq[6]:/.tg_load_contrib
3104 ±26% -40.9% 1834 ±16% sched_debug.cfs_rq[20]:/.blocked_load_avg
14604 ±21% +25.8% 18369 ± 9% sched_debug.cpu#21.curr->pid
604711 ±26% +81.6% 1098097 ±19% sched_debug.cpu#2.avg_idle
1223 ±43% +94.5% 2379 ±29% sched_debug.cfs_rq[2]:/.tg_load_contrib
14 ±40% -46.5% 7 ±49% sched_debug.cpu#0.cpu_load[1]
2901 ±15% -57.6% 1230 ±32% sched_debug.cfs_rq[6]:/.blocked_load_avg
22133 ± 1% -26.0% 16387 ±20% sched_debug.cpu#11.curr->pid
1774944 ± 7% -25.7% 1319004 ±16% cpuidle.C3-SNB.time
17 ±42% -55.8% 7 ±49% sched_debug.cpu#0.cpu_load[4]
32 ±16% +319.6% 135 ±28% sched_debug.cpu#9.load
3173 ±26% -40.7% 1880 ±15% sched_debug.cfs_rq[20]:/.tg_load_contrib
29 ±48% +90.8% 55 ± 9% sched_debug.cpu#5.cpu_load[2]
122294 ± 3% +14.2% 139679 ± 4% numa-vmstat.node1.nr_active_anon
2719 ±22% -49.1% 1385 ±38% sched_debug.cfs_rq[14]:/.blocked_load_avg
26 ±36% +68.4% 44 ±28% sched_debug.cfs_rq[20]:/.runnable_load_avg
2 ±40% +133.3% 4 ±36% sched_debug.cfs_rq[29]:/.nr_spread_over
28 ±40% +71.4% 48 ± 6% sched_debug.cpu#5.cpu_load[3]
22 ±24% +126.9% 50 ±44% sched_debug.cfs_rq[24]:/.load
27 ±34% +55.6% 42 ±10% sched_debug.cpu#5.cpu_load[4]
57 ±16% -23.3% 44 ± 6% sched_debug.cpu#15.cpu_load[2]
3698 ±17% -44.1% 2066 ±36% sched_debug.cfs_rq[8]:/.blocked_load_avg
3736 ±17% -43.0% 2131 ±35% sched_debug.cfs_rq[8]:/.tg_load_contrib
1137 ±21% -38.1% 704 ± 1% slabinfo.kmalloc-192.num_slabs
1137 ±21% -38.1% 704 ± 1% slabinfo.kmalloc-192.active_slabs
26 ±44% +72.5% 46 ±30% sched_debug.cpu#24.cpu_load[1]
585607 ±37% +66.8% 976862 ±17% sched_debug.cpu#1.avg_idle
34949 ±26% -45.1% 19186 ± 1% slabinfo.task_struct.active_objs
34958 ±26% -45.1% 19205 ± 1% slabinfo.task_struct.num_objs
8739 ±26% -45.1% 4801 ± 1% slabinfo.task_struct.num_slabs
8739 ±26% -45.1% 4801 ± 1% slabinfo.task_struct.active_slabs
1.755e+08 ± 1% -38.7% 1.077e+08 ±14% cpuidle.C1E-SNB.time
17413 ± 9% +20.3% 20940 ± 2% sched_debug.cpu#1.curr->pid
1215 ±25% -42.3% 701 ± 0% slabinfo.signal_cache.num_slabs
1215 ±25% -42.3% 701 ± 0% slabinfo.signal_cache.active_slabs
36961 ±25% -40.4% 22031 ± 2% slabinfo.task_xstate.active_objs
948 ±25% -40.3% 566 ± 2% slabinfo.task_xstate.active_slabs
948 ±25% -40.3% 566 ± 2% slabinfo.task_xstate.num_slabs
36988 ±25% -40.2% 22106 ± 2% slabinfo.task_xstate.num_objs
67514 ±16% -19.2% 54578 ± 5% numa-vmstat.node0.nr_active_anon
28 ±42% +57.1% 44 ±14% sched_debug.cpu#24.cpu_load[2]
40258 ±24% -32.2% 27293 ±10% slabinfo.kmalloc-128.active_objs
629 ±24% -31.9% 428 ±10% slabinfo.kmalloc-128.active_slabs
629 ±24% -31.9% 428 ±10% slabinfo.kmalloc-128.num_slabs
40288 ±24% -31.9% 27449 ±10% slabinfo.kmalloc-128.num_objs
65 ±48% -63.6% 23 ±49% sched_debug.cpu#22.cpu_load[0]
38175 ±12% -57.3% 16317 ± 4% numa-vmstat.node1.nr_kernel_stack
31 ±11% +41.5% 44 ±14% sched_debug.cpu#24.cpu_load[4]
894920 ±37% +52.0% 1360209 ±13% sched_debug.cpu#5.avg_idle
3.965e+08 ±10% -44.7% 2.194e+08 ±10% cpuidle.C1-SNB.time
28 ± 5% +57.0% 45 ±16% sched_debug.cpu#9.cpu_load[0]
213890 ±28% -44.9% 117926 ±14% numa-meminfo.node1.KernelStack
93234 ±18% +79.4% 167249 ±18% slabinfo.kmalloc-512.active_objs
30 ±38% +50.0% 46 ±11% sched_debug.cpu#7.cpu_load[2]
36480 ±25% -42.3% 21059 ± 0% slabinfo.signal_cache.num_objs
36396 ±25% -42.3% 20991 ± 0% slabinfo.signal_cache.active_objs
2477866 ±23% -42.3% 1429384 ±25% sched_debug.cpu#23.avg_idle
20 ±22% +50.0% 31 ±13% sched_debug.cpu#11.nr_running
18 ± 7% +38.9% 25 ± 3% sched_debug.cpu#15.nr_running
15 ±29% +41.3% 21 ± 5% sched_debug.cpu#31.nr_running
298 ±10% +50.0% 448 ±11% slabinfo.kmem_cache.active_objs
298 ±10% +50.0% 448 ±11% slabinfo.kmem_cache.num_objs
5.746e+08 ±15% +49.7% 8.602e+08 ± 7% cpuidle.C7-SNB.time
29 ±42% +79.3% 52 ±28% sched_debug.cpu#7.cpu_load[1]
47797 ±21% -38.1% 29598 ± 1% slabinfo.kmalloc-192.num_objs
14154 ±42% +45.2% 20558 ± 3% sched_debug.cpu#3.curr->pid
14 ±26% +59.5% 22 ±28% sched_debug.cpu#19.nr_running
15030 ±11% +42.4% 21408 ± 2% sched_debug.cpu#18.curr->pid
121150 ± 3% +13.8% 137836 ± 4% numa-vmstat.node1.nr_anon_pages
18 ±25% +33.3% 24 ± 9% sched_debug.cpu#27.nr_running
17 ±14% +46.2% 25 ±11% sched_debug.cpu#24.nr_running
20 ±10% +23.0% 25 ±13% sched_debug.cpu#20.nr_running
148903 ± 4% +26.4% 188145 ± 9% cpuidle.C7-SNB.usage
253536 ±19% +52.7% 387249 ± 9% slabinfo.kmalloc-512.num_objs
3961 ±19% +52.7% 6050 ± 9% slabinfo.kmalloc-512.num_slabs
3961 ±19% +52.7% 6050 ± 9% slabinfo.kmalloc-512.active_slabs
17 ±29% +57.7% 27 ± 1% sched_debug.cpu#7.nr_running
207838 ±11% -37.8% 129252 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
407 ± 7% +36.6% 557 ± 9% slabinfo.kmem_cache_node.active_objs
32 ±45% +60.2% 52 ± 5% sched_debug.cpu#31.cpu_load[1]
426 ± 7% +35.0% 576 ± 9% slabinfo.kmem_cache_node.num_objs
47738 ±21% -38.2% 29494 ± 1% slabinfo.kmalloc-192.active_objs
1956305 ± 7% -15.0% 1661919 ± 8% sched_debug.cpu#20.avg_idle
75011 ±12% -19.6% 60311 ± 1% slabinfo.kernfs_node_cache.active_objs
75036 ±12% -19.6% 60361 ± 1% slabinfo.kernfs_node_cache.num_objs
2084 ±13% -19.6% 1676 ± 1% slabinfo.kernfs_node_cache.active_slabs
2084 ±13% -19.6% 1676 ± 1% slabinfo.kernfs_node_cache.num_slabs
4049841 ± 2% +32.0% 5343918 ±14% cpuidle.C1-SNB.usage
293311 ± 3% +12.1% 328716 ± 1% softirqs.SCHED
683095 ±15% -31.3% 469359 ±15% numa-meminfo.node1.SUnreclaim
129912 ±21% +49.2% 193826 ± 9% slabinfo.kmalloc-256.active_objs
719915 ±15% -29.8% 505238 ±14% numa-meminfo.node1.Slab
94716 ±14% -19.5% 76235 ±19% sched_debug.cfs_rq[22]:/.tg_load_avg
4553 ±20% +39.9% 6370 ± 8% slabinfo.kmalloc-256.active_slabs
4553 ±20% +39.9% 6370 ± 8% slabinfo.kmalloc-256.num_slabs
66028 ±14% -17.7% 54316 ± 5% numa-vmstat.node0.nr_anon_pages
81070 ± 4% -17.4% 66948 ±11% sched_debug.cfs_rq[5]:/.tg_load_avg
291472 ±20% +39.9% 407746 ± 8% slabinfo.kmalloc-256.num_objs
71814 ±10% -17.8% 59021 ±10% sched_debug.cfs_rq[3]:/.tg_load_avg
78608 ± 2% -15.3% 66578 ±11% sched_debug.cfs_rq[4]:/.tg_load_avg
757843 ± 9% +9.1% 827114 ± 5% meminfo.AnonPages

testbox/testcase/testparams: lkp-sb03/nepim/300s-100%-udp6

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
-327409 ± 0% +1.3% -331788 ± 0% sched_debug.cfs_rq[20]:/.spread0
1651 ±23% -81.4% 308 ±15% sched_debug.cpu#5.ttwu_local
1977 ±21% -86.5% 266 ±13% sched_debug.cpu#7.ttwu_local
2233 ±35% -75.3% 551 ±21% sched_debug.cpu#12.ttwu_local
-2 ±-20% +136.1% -5 ±-16% sched_debug.cpu#9.nr_uninterruptible
2279 ±11% -83.8% 368 ± 5% sched_debug.cpu#4.ttwu_local
1253 ±28% -73.0% 338 ± 4% sched_debug.cpu#6.ttwu_local
3305 ±19% -81.3% 616 ±34% sched_debug.cpu#10.ttwu_local
4201 ±27% -68.6% 1319 ± 7% sched_debug.cpu#12.ttwu_count
4965 ±40% -70.6% 1461 ±40% sched_debug.cpu#15.ttwu_count
-294566 ±-14% +13.0% -332791 ± 0% sched_debug.cfs_rq[8]:/.spread0
3498 ±24% -75.9% 844 ± 5% sched_debug.cpu#5.ttwu_count
2721 ±26% -89.2% 293 ±18% sched_debug.cpu#1.ttwu_local
3949 ±33% -78.7% 842 ±13% sched_debug.cpu#7.ttwu_count
2237 ±40% -80.0% 447 ±44% sched_debug.cpu#8.ttwu_local
4376 ± 9% -75.7% 1063 ± 5% sched_debug.cpu#4.ttwu_count
336 ±40% -38.3% 207 ±19% sched_debug.cfs_rq[27]:/.exec_clock
11571 ±26% -54.2% 5295 ±18% sched_debug.cpu#15.nr_switches
39 ± 7% +41.4% 56 ± 6% sched_debug.cpu#23.ttwu_local
5760 ±26% -54.8% 2603 ±18% sched_debug.cpu#15.sched_goidle
3848 ±38% -73.8% 1007 ±43% sched_debug.cpu#8.ttwu_count
2663 ±24% -57.5% 1133 ±38% sched_debug.cpu#6.ttwu_count
3354 ±23% -68.6% 1054 ± 8% sched_debug.cpu#11.ttwu_local
292 ±43% +99.5% 582 ±23% sched_debug.cpu#23.sched_goidle
6044 ±40% +65.4% 9999 ±24% sched_debug.cfs_rq[5]:/.min_vruntime
5173 ±26% -49.5% 2610 ±23% sched_debug.cpu#10.sched_goidle
5064 ±24% -69.1% 1562 ±27% sched_debug.cpu#1.ttwu_count
13939 ±17% -60.7% 5484 ±21% sched_debug.cpu#10.nr_switches
8052 ±21% -48.1% 4181 ±30% sched_debug.cpu#8.nr_switches
6978 ±11% -33.3% 4656 ± 9% sched_debug.cpu#6.nr_switches
4072 ± 9% -43.6% 2296 ± 7% sched_debug.cpu#5.sched_goidle
6458 ±14% -37.9% 4008 ± 5% sched_debug.cpu#12.sched_goidle
11124 ± 5% -19.5% 8960 ± 6% sched_debug.cpu#3.sched_goidle
90.84 ± 4% +95.6% 177.72 ±47% sched_debug.cfs_rq[30]:/.exec_clock
605 ±41% +99.8% 1210 ±22% sched_debug.cpu#23.nr_switches
3465 ±11% -34.0% 2286 ±10% sched_debug.cpu#6.sched_goidle
8984 ±14% -30.0% 6290 ±26% sched_debug.cpu#13.nr_switches
3925 ±21% -47.5% 2060 ±30% sched_debug.cpu#8.sched_goidle
7184 ±15% -40.3% 4290 ± 7% sched_debug.cpu#11.ttwu_count
886 ± 4% +17.7% 1043 ± 9% sched_debug.cpu#11.curr->pid
423855 ±49% +139.1% 1013399 ±19% cpuidle.C3-SNB.time
8204 ± 9% -42.9% 4688 ± 8% sched_debug.cpu#5.nr_switches
10570 ±24% +42.3% 15043 ± 5% sched_debug.cfs_rq[3]:/.min_vruntime
4455 ±14% -30.5% 3094 ±26% sched_debug.cpu#13.sched_goidle
6062 ± 3% -37.4% 3796 ± 8% sched_debug.cpu#4.sched_goidle
12202 ± 3% -37.0% 7686 ± 8% sched_debug.cpu#4.nr_switches
23602 ± 9% -24.6% 17787 ± 7% sched_debug.cpu#11.nr_switches
1614 ± 4% +27.9% 2064 ±25% sched_debug.cpu#28.sched_goidle
2634 ± 4% +106.3% 5435 ± 6% sched_debug.cpu#17.ttwu_count
22391 ± 5% -18.7% 18206 ± 6% sched_debug.cpu#3.nr_switches
1933 ±13% -17.0% 1605 ± 1% sched_debug.cpu#20.sched_goidle
31292 ±39% -37.3% 19629 ±14% sched_debug.cpu#11.sched_count
6957 ± 6% -41.0% 4101 ± 3% sched_debug.cpu#3.ttwu_count
13015 ±14% -35.8% 8358 ± 5% sched_debug.cpu#12.nr_switches
11715 ± 9% -25.5% 8728 ± 7% sched_debug.cpu#11.sched_goidle
745 ± 4% +57.6% 1174 ±41% sched_debug.cpu#28.ttwu_local
8103 ± 9% -41.0% 4780 ±21% sched_debug.cpu#7.nr_switches
5644 ±24% -37.9% 3502 ± 7% sched_debug.cpu#2.sched_goidle
4009 ± 8% -41.2% 2355 ±21% sched_debug.cpu#7.sched_goidle
5472 ±22% -43.5% 3089 ±18% sched_debug.cpu#1.sched_goidle
3223 ±13% -57.2% 1378 ± 7% sched_debug.cpu#3.ttwu_local
4652 ±16% -36.9% 2934 ±23% sched_debug.cfs_rq[27]:/.min_vruntime
69 ±19% +101.1% 140 ±10% sched_debug.cpu#29.ttwu_local
11110 ±21% -43.7% 6260 ±18% sched_debug.cpu#1.nr_switches
832 ±13% +97.3% 1641 ± 7% sched_debug.cpu#17.ttwu_local
5188 ± 6% +35.6% 7038 ± 3% sched_debug.cpu#17.nr_switches
4689 ±10% -13.7% 4045 ± 0% sched_debug.cpu#20.sched_count
5220 ±15% -19.4% 4209 ±10% sched_debug.cfs_rq[28]:/.min_vruntime
249 ±18% +35.0% 337 ±21% sched_debug.cpu#18.ttwu_local
53 ±17% +49.3% 80 ± 9% sched_debug.cpu#31.ttwu_local
455 ± 8% -30.5% 316 ±21% cpuidle.C3-SNB.usage
5275 ±32% -34.8% 3440 ±18% sched_debug.cfs_rq[20]:/.min_vruntime
3474 ± 7% -12.0% 3058 ± 6% cpuidle.C1E-SNB.usage
2169 ± 6% +15.7% 2510 ± 4% sched_debug.cpu#18.ttwu_count
139702 ± 1% -33.8% 92508 ± 1% cpuidle.C7-SNB.usage
406 ± 9% -18.6% 330 ±16% numa-vmstat.node1.nr_page_table_pages
4676 ±10% -13.8% 4033 ± 0% sched_debug.cpu#20.nr_switches
5780 ± 6% -18.1% 4736 ± 4% cpuidle.C1-SNB.usage
47898 ± 5% +32.0% 63244 ± 3% softirqs.SCHED
989252 ± 1% -9.3% 897324 ± 6% sched_debug.cpu#14.avg_idle
399538 ± 1% +11.7% 446206 ± 0% softirqs.TIMER
343 ±18% +12507.6% 43294 ± 2% time.involuntary_context_switches
2506 ± 0% -5.0% 2382 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-snb01/will-it-scale/open2

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
253013 ± 0% -1.2% 249916 ± 0% will-it-scale.per_thread_ops
4354 ±41% -84.1% 692 ±20% sched_debug.cpu#13.ttwu_local
5296 ±18% -83.9% 853 ±13% sched_debug.cpu#12.ttwu_local
7641 ±38% -77.3% 1735 ±18% sched_debug.cpu#13.ttwu_count
9118 ±18% -77.2% 2074 ±11% sched_debug.cpu#12.ttwu_count
9904 ± 6% -48.4% 5115 ±34% sched_debug.cpu#10.ttwu_count
8839 ±27% -79.3% 1833 ±29% sched_debug.cpu#8.ttwu_local
101354 ±18% -27.7% 73268 ±19% sched_debug.cfs_rq[16]:/.exec_clock
24061 ±28% -61.1% 9371 ±29% sched_debug.cpu#15.nr_switches
11419 ±29% -73.9% 2977 ± 5% sched_debug.cpu#15.sched_goidle
14046 ±22% -78.0% 3091 ±19% sched_debug.cpu#8.ttwu_count
32344 ±20% -38.7% 19821 ±23% sched_debug.cpu#13.sched_count
5788 ±13% -77.0% 1331 ± 9% sched_debug.cpu#11.ttwu_local
10055 ±15% -40.7% 5957 ±21% sched_debug.cpu#10.sched_goidle
20203 ±18% -40.0% 12127 ±24% sched_debug.cpu#14.nr_switches
24515 ± 7% -30.7% 16990 ±21% sched_debug.cpu#10.nr_switches
27297 ±20% -59.1% 11172 ±16% sched_debug.cpu#8.nr_switches
75 ±30% -45.6% 41 ±48% sched_debug.cpu#16.cpu_load[1]
9222 ±21% -61.1% 3586 ± 7% sched_debug.cpu#14.sched_goidle
74 ±30% -45.1% 40 ±48% sched_debug.cpu#16.cpu_load[0]
10894 ±10% -49.7% 5474 ± 8% sched_debug.cpu#12.sched_goidle
71 ±31% -44.7% 39 ±31% sched_debug.cfs_rq[6]:/.blocked_load_avg
3515 ±19% -32.0% 2390 ±12% sched_debug.cpu#6.sched_goidle
20866 ±16% -58.6% 8634 ±10% sched_debug.cpu#13.nr_switches
11007 ±18% -64.8% 3878 ±17% sched_debug.cpu#8.sched_goidle
10057 ±11% -57.8% 4240 ±17% sched_debug.cpu#11.ttwu_count
37414 ±26% +186.5% 107204 ±15% sched_debug.cpu#0.sched_count
18 ±16% +34.4% 24 ±23% sched_debug.cfs_rq[26]:/.runnable_load_avg
1384240 ±40% -67.3% 452908 ±22% cpuidle.C3-SNB.time
9831 ±17% -62.7% 3666 ±12% sched_debug.cpu#13.sched_goidle
7050 ±19% -50.0% 3527 ±11% sched_debug.cpu#9.sched_goidle
27447 ±13% -32.1% 18629 ±13% sched_debug.cpu#11.nr_switches
5961 ±19% +315.5% 24768 ±24% sched_debug.cpu#0.ttwu_local
4277 ±13% +132.7% 9953 ±26% sched_debug.cpu#17.ttwu_count
17147 ±14% +387.7% 83620 ±21% sched_debug.cpu#0.nr_switches
23029 ±10% -45.8% 12486 ± 8% sched_debug.cpu#12.nr_switches
11920 ±11% -29.5% 8400 ±15% sched_debug.cpu#11.sched_goidle
80 ±12% -51.0% 39 ±44% sched_debug.cfs_rq[16]:/.load
61 ±37% -47.7% 32 ±42% sched_debug.cpu#10.load
13026 ±18% +265.2% 47568 ±22% sched_debug.cpu#0.ttwu_count
80 ±13% -49.6% 40 ±43% sched_debug.cpu#16.load
1445 ±31% +98.2% 2863 ±39% sched_debug.cpu#17.ttwu_local
7763 ±17% +76.4% 13693 ±35% sched_debug.cpu#17.nr_switches
810769 ± 3% +12.5% 912189 ± 2% sched_debug.cpu#1.avg_idle
74 ±30% -46.8% 39 ±44% sched_debug.cfs_rq[16]:/.runnable_load_avg
1120 ±16% -41.9% 651 ±18% cpuidle.C3-SNB.usage
92087 ±20% +30.7% 120342 ±10% sched_debug.cpu#0.nr_load_updates
914687 ± 4% -10.4% 819316 ± 5% sched_debug.cpu#17.avg_idle
2080 ± 2% -9.8% 1877 ± 1% slabinfo.signal_cache.num_objs
2050 ± 4% -8.4% 1877 ± 1% slabinfo.signal_cache.active_objs
2175 ±13% +71.2% 3725 ±26% sched_debug.cpu#18.ttwu_count
335319 ± 0% -21.0% 265024 ± 0% cpuidle.C7-SNB.usage
18870 ± 0% +265.8% 69025 ± 0% time.involuntary_context_switches
1981 ± 0% +7.1% 2122 ± 0% vmstat.system.cs
18240 ± 0% -1.3% 18004 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-sb03/nuttcp/300s

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
3883.97 ±35% -100.0% 0.00 ± 0% sched_debug.cfs_rq[5]:/.max_vruntime
3883.97 ±35% -100.0% 0.00 ± 0% sched_debug.cfs_rq[5]:/.MIN_vruntime
8618 ±29% -74.5% 2195 ±28% sched_debug.cpu#14.ttwu_count
2971 ±48% -88.0% 357 ±39% sched_debug.cpu#5.ttwu_local
549 ±22% +442.9% 2984 ±33% sched_debug.cpu#21.nr_switches
1789 ±46% -75.4% 440 ±49% sched_debug.cpu#4.ttwu_local
7097 ±30% -71.8% 2000 ±33% sched_debug.cpu#15.ttwu_local
5737 ±10% -92.0% 458 ±25% sched_debug.cpu#10.ttwu_local
8862 ±26% -66.2% 2994 ±12% sched_debug.cpu#15.ttwu_count
63.34 ±10% +391.1% 311.04 ±43% sched_debug.cfs_rq[23]:/.exec_clock
10 ±12% -38.7% 6 ±37% sched_debug.cfs_rq[17]:/.nr_spread_over
1828 ±22% -38.2% 1130 ±36% sched_debug.cfs_rq[11]:/.exec_clock
562 ±22% +433.1% 2997 ±33% sched_debug.cpu#21.sched_count
63 ±38% -66.7% 21 ±43% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
1523 ±29% -58.6% 630 ±30% sched_debug.cpu#2.ttwu_local
46 ±20% +344.6% 206 ±26% sched_debug.cpu#23.ttwu_local
11 ±44% -100.0% 0 ± 0% sched_debug.cpu#29.cpu_load[4]
444 ±10% +511.9% 2722 ±48% sched_debug.cfs_rq[3]:/.exec_clock
3428 ± 4% +43.4% 4917 ± 5% sched_debug.cpu#25.ttwu_count
10 ± 0% -100.0% 0 ± 0% sched_debug.cfs_rq[5]:/.blocked_load_avg
60 ±26% +637.6% 445 ±21% sched_debug.cpu#21.ttwu_local
1044 ±19% +57.7% 1647 ± 7% sched_debug.cpu#25.ttwu_local
2923 ±37% -65.6% 1006 ±40% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
9 ±39% +159.3% 23 ±30% sched_debug.cpu#0.cpu_load[3]
193 ±34% -100.0% 0 ± 0% sched_debug.cpu#28.cpu_load[0]
11 ±16% -52.9% 5 ±23% sched_debug.cfs_rq[26]:/.tg_runnable_contrib
2 ±40% -100.0% 0 ± 0% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
114 ±31% -83.3% 19 ±17% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
550 ±13% -54.9% 248 ±22% sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
10 ±22% +140.6% 25 ±45% sched_debug.cpu#0.cpu_load[4]
4891 ±33% -54.9% 2206 ±49% sched_debug.cfs_rq[10]:/.exec_clock
3279 ±40% -100.0% 0 ± 0% sched_debug.cpu#28.curr->pid
196 ±35% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.runnable_load_avg
2481 ± 5% -43.3% 1406 ±32% sched_debug.cfs_rq[31]:/.exec_clock
196 ±35% -100.0% 0 ± 0% sched_debug.cfs_rq[28]:/.load
3.55e+08 ± 0% +8.9% 3.865e+08 ± 5% proc-vmstat.pgalloc_normal
2357 ± 7% +103.2% 4790 ± 2% sched_debug.cpu#17.ttwu_count
196 ±35% -100.0% 0 ± 0% sched_debug.cpu#28.load
2705 ±20% -42.9% 1544 ±49% sched_debug.cpu#31.nr_switches
175 ±37% +89.2% 332 ±48% sched_debug.cfs_rq[21]:/.avg->runnable_avg_sum
1186 ±19% -43.7% 668 ±46% sched_debug.cpu#31.sched_goidle
3161 ±27% -38.0% 1959 ± 0% meminfo.AnonHugePages
349 ±42% +131.2% 808 ±37% sched_debug.cpu#3.ttwu_local
6468 ± 9% +30.4% 8433 ±19% sched_debug.cpu#0.ttwu_count
866 ±13% +91.2% 1656 ± 4% sched_debug.cpu#17.ttwu_local
6038 ± 7% -34.0% 3986 ±30% sched_debug.cfs_rq[28]:/.min_vruntime
5313 ± 9% -40.4% 3169 ±36% sched_debug.cfs_rq[31]:/.min_vruntime
1862 ± 4% +54.5% 2878 ±21% sched_debug.cfs_rq[23]:/.min_vruntime
5098 ± 4% -33.3% 3402 ± 6% sched_debug.cfs_rq[29]:/.min_vruntime
126798 ±15% -28.1% 91178 ± 0% sched_debug.cpu#4.nr_load_updates
112718 ±10% -19.7% 90480 ± 1% sched_debug.cpu#5.nr_load_updates
470584 ±47% +101.6% 948801 ± 2% sched_debug.cpu#5.avg_idle
26143 ± 2% +66.8% 43614 ± 7% softirqs.RCU
959521 ± 4% -35.0% 623768 ±41% sched_debug.cpu#13.avg_idle
967558 ± 2% -19.5% 778607 ±19% sched_debug.cpu#15.avg_idle
325196 ± 0% -26.0% 240605 ± 0% cpuidle.C7-SNB.usage
662043 ±14% +46.9% 972290 ± 2% sched_debug.cpu#4.avg_idle
31 ±34% -32.3% 21 ±14% sched_debug.cfs_rq[25]:/.tg_runnable_contrib
1466 ±33% -32.1% 995 ±14% sched_debug.cfs_rq[25]:/.avg->runnable_avg_sum
10900 ± 3% +18.2% 12884 ± 7% slabinfo.kmalloc-256.num_objs
308 ± 0% +6197.7% 19418 ± 0% time.involuntary_context_switches
0.00 ± 0% +4.9% 0.00 ± 3% energy.energy-cores
41.80 ± 0% +4.9% 43.85 ± 3% turbostat.Cor_W
69.29 ± 0% +3.0% 71.34 ± 1% turbostat.Pkg_W
0.00 ± 0% +3.0% 0.00 ± 1% energy.energy-pkg

testbox/testcase/testparams: xps2/pigz/100%-512K

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
797 ±19% +73.4% 1382 ± 7% sched_debug.cfs_rq[6]:/.tg_load_contrib
679 ±24% +83.1% 1244 ± 7% sched_debug.cfs_rq[6]:/.blocked_load_avg
135 ± 9% -10.6% 121 ±10% sched_debug.cpu#7.load
421 ± 3% -30.3% 294 ± 7% sched_debug.cfs_rq[1]:/.tg_load_contrib
306 ± 5% -51.2% 149 ±20% sched_debug.cfs_rq[1]:/.blocked_load_avg
2002 ± 4% +36.3% 2729 ±11% sched_debug.cpu#7.sched_goidle
503763 ±15% +24.2% 625908 ±13% sched_debug.cpu#5.avg_idle
22573 ±23% +30.5% 29451 ±14% cpuidle.C1-NHM.time
101 ± 8% +25.3% 127 ± 3% cpuidle.C1-NHM.usage
5399 ± 1% +19.8% 6469 ± 9% sched_debug.cfs_rq[0]:/.tg_load_avg
980 ± 2% +12.9% 1106 ± 3% slabinfo.kmalloc-96.active_objs
980 ± 2% +12.9% 1106 ± 3% slabinfo.kmalloc-96.num_objs
5425 ± 1% +19.8% 6500 ± 9% sched_debug.cfs_rq[1]:/.tg_load_avg
5422 ± 1% +19.4% 6474 ± 9% sched_debug.cfs_rq[5]:/.tg_load_avg
2151 ± 1% -10.1% 1935 ± 4% slabinfo.kmalloc-256.num_objs
5416 ± 1% +19.6% 6479 ± 8% sched_debug.cfs_rq[3]:/.tg_load_avg
5431 ± 1% +19.4% 6486 ± 9% sched_debug.cfs_rq[6]:/.tg_load_avg
5402 ± 2% +19.7% 6464 ± 8% sched_debug.cfs_rq[2]:/.tg_load_avg
5423 ± 0% +19.5% 6478 ± 9% sched_debug.cfs_rq[7]:/.tg_load_avg
5429 ± 1% +19.6% 6495 ± 8% sched_debug.cfs_rq[4]:/.tg_load_avg

testbox/testcase/testparams: xps/ftrace_onoff/5m

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0.00 ± 0% +8.1e+10% 813.10 ±32% sched_debug.cfs_rq[0]:/.MIN_vruntime
0.00 ± 0% +8.1e+10% 813.23 ±32% sched_debug.cfs_rq[0]:/.max_vruntime
-21 ±-48% +130.8% -50 ±-17% sched_debug.cpu#3.nr_uninterruptible
1754 ± 5% +88.5% 3308 ± 9% sched_debug.cfs_rq[5]:/.spread0
2504 ±25% +100.0% 5009 ± 6% sched_debug.cfs_rq[6]:/.spread0
11853 ±25% +113.3% 25277 ± 5% sched_debug.cpu#5.ttwu_local
2544 ± 6% +126.2% 5753 ± 8% sched_debug.cfs_rq[4]:/.spread0
1809 ±33% +86.1% 3367 ±15% sched_debug.cfs_rq[7]:/.spread0
14455 ± 2% +111.4% 30556 ± 7% sched_debug.cpu#7.ttwu_local
-1841 ± 0% +60.0% -2945 ± 0% sched_debug.cpu#2.nr_uninterruptible
-11 ±-21% +282.9% -44 ±-14% sched_debug.cpu#1.nr_uninterruptible
532 ±10% +78.4% 949 ± 4% sched_debug.cpu#4.nr_uninterruptible
29707 ±10% +47.3% 43756 ±12% sched_debug.cpu#4.ttwu_local
24165 ±11% +39.9% 33805 ±13% sched_debug.cpu#6.ttwu_local
60 ±10% +55.5% 94 ±17% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
2840 ± 9% +54.2% 4380 ±16% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
28 ±46% +101.2% 57 ±16% sched_debug.cpu#4.cpu_load[4]
3 ±14% -70.0% 1 ± 0% sched_debug.cfs_rq[7]:/.nr_spread_over
74392 ± 8% +45.5% 108276 ± 7% sched_debug.cpu#6.sched_count
553 ± 5% +60.1% 886 ± 3% sched_debug.cpu#5.nr_uninterruptible
21753 ±12% +105.0% 44597 ± 4% sched_debug.cpu#5.ttwu_count
25413 ± 0% +107.6% 52755 ± 5% sched_debug.cpu#7.ttwu_count
5262 ± 4% +37.6% 7243 ± 2% sched_debug.cfs_rq[5]:/.exec_clock
4733 ± 7% +25.4% 5936 ± 3% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
85 ±29% +56.8% 134 ±15% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
3972 ±29% +56.2% 6203 ±15% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
11046 ± 1% +38.8% 15330 ± 1% sched_debug.cfs_rq[4]:/.min_vruntime
41713 ± 7% +71.2% 71395 ± 8% sched_debug.cpu#4.ttwu_count
1550134 ± 0% +47.4% 2285060 ± 0% proc-vmstat.numa_local
1550134 ± 0% +47.4% 2285060 ± 0% proc-vmstat.numa_hit
1561481 ± 0% +47.3% 2300370 ± 0% proc-vmstat.pgfree
58.66 ± 0% -26.2% 43.29 ± 0% turbostat.%c6
137258 ± 5% -21.6% 107598 ± 0% sched_debug.cpu#2.ttwu_local
102 ± 8% +25.1% 128 ± 3% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
9 ±48% -65.5% 3 ±14% sched_debug.cfs_rq[5]:/.nr_spread_over
3 ±14% -60.0% 1 ±35% sched_debug.cfs_rq[1]:/.nr_spread_over
0 ± 0% +Inf% 1 ± 0% vmstat.procs.r
52586 ±27% +179.8% 147141 ±44% cpuidle.POLL.time
1617673 ± 0% +41.4% 2288012 ± 0% proc-vmstat.pgfault
5885 ± 2% +49.6% 8807 ± 1% sched_debug.cfs_rq[4]:/.exec_clock
34481 ± 9% +71.3% 59084 ± 7% sched_debug.cpu#6.ttwu_count
1213 ±10% +28.5% 1559 ±11% sched_debug.cfs_rq[4]:/.tg_load_contrib
10257 ± 0% +25.6% 12885 ± 1% sched_debug.cfs_rq[5]:/.min_vruntime
48767 ±12% +72.0% 83904 ± 3% sched_debug.cpu#5.sched_count
83323 ± 7% +46.5% 122037 ± 8% sched_debug.cpu#4.sched_count
471 ± 6% +71.7% 810 ± 6% sched_debug.cpu#6.nr_uninterruptible
168963 ± 4% -23.4% 129367 ± 0% sched_debug.cpu#2.ttwu_count
74340 ± 8% +45.6% 108231 ± 7% sched_debug.cpu#6.nr_switches
1 ±28% +1560.0% 27 ±49% sched_debug.cpu#1.cpu_load[2]
21505 ±13% +67.3% 35987 ± 4% sched_debug.cpu#5.sched_goidle
55 ±28% +143.6% 134 ±14% sched_debug.cfs_rq[0]:/.blocked_load_avg
2987 ± 0% +49.9% 4477 ± 0% cpuidle.POLL.usage
216974 ± 1% -16.2% 181870 ± 9% sched_debug.cpu#3.sched_goidle
34261 ± 8% +39.4% 47774 ± 8% sched_debug.cpu#6.sched_goidle
286 ±13% +32.8% 380 ± 1% sched_debug.cpu#7.nr_uninterruptible
3.98 ± 1% +49.0% 5.92 ± 1% turbostat.%c3
1164419 ± 0% +47.6% 1718674 ± 0% proc-vmstat.pgalloc_dma32
48716 ±12% +72.1% 83860 ± 3% sched_debug.cpu#5.nr_switches
5676 ± 5% +12.8% 6401 ± 3% sched_debug.cfs_rq[3]:/.min_vruntime
37862 ± 8% +40.1% 53053 ±10% sched_debug.cpu#4.sched_goidle
83271 ± 7% +46.5% 121990 ± 8% sched_debug.cpu#4.nr_switches
398812 ± 0% +46.4% 583757 ± 0% proc-vmstat.pgalloc_normal
334712 ± 4% -18.3% 273483 ± 0% sched_debug.cpu#2.sched_count
437235 ± 1% -15.5% 369670 ± 8% sched_debug.cpu#3.nr_switches
11007 ± 5% +32.5% 14585 ± 2% sched_debug.cfs_rq[6]:/.min_vruntime
45 ±11% +92.6% 87 ± 9% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
2133 ±10% +89.3% 4039 ± 9% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
5686 ± 5% +45.1% 8248 ± 2% sched_debug.cfs_rq[6]:/.exec_clock
59 ±16% +169.8% 161 ±13% sched_debug.cfs_rq[0]:/.tg_load_contrib
217819 ± 1% -18.2% 178157 ± 9% sched_debug.cpu#3.ttwu_count
437287 ± 1% -15.4% 369733 ± 8% sched_debug.cpu#3.sched_count
30.42 ± 1% -50.3% 15.11 ± 2% turbostat.%pc6
115697 ± 0% +100.0% 231357 ± 1% cpuidle.C3-NHM.usage
73 ±23% +133.3% 170 ±25% sched_debug.cfs_rq[1]:/.tg_load_contrib
68 ±22% +127.0% 154 ±29% sched_debug.cfs_rq[1]:/.blocked_load_avg
40 ±26% +59.5% 64 ±17% sched_debug.cpu#5.cpu_load[3]
54634 ± 1% +79.7% 98153 ± 5% sched_debug.cpu#7.nr_switches
334661 ± 4% -18.3% 273419 ± 0% sched_debug.cpu#2.nr_switches
165837 ± 4% -19.2% 134051 ± 0% sched_debug.cpu#2.sched_goidle
24575 ± 1% +75.9% 43216 ± 5% sched_debug.cpu#7.sched_goidle
27 ±13% +65.9% 45 ±20% sched_debug.cpu#5.cpu_load[4]
189542 ± 1% -16.6% 158027 ±10% sched_debug.cpu#3.ttwu_local
54684 ± 1% +79.6% 98192 ± 5% sched_debug.cpu#7.sched_count
31471 ± 0% +66.7% 52466 ± 0% sched_debug.cpu#7.nr_load_updates
1120 ±19% +28.5% 1439 ±11% sched_debug.cfs_rq[4]:/.blocked_load_avg
32395 ± 1% +71.5% 55564 ± 0% sched_debug.cpu#6.nr_load_updates
3844 ± 0% +58.3% 6085 ± 0% proc-vmstat.nr_written
5966 ± 0% +14.8% 6851 ± 5% sched_debug.cfs_rq[1]:/.min_vruntime
34252 ± 1% +68.6% 57744 ± 0% sched_debug.cpu#4.nr_load_updates
30594 ± 2% +64.6% 50355 ± 1% sched_debug.cpu#5.nr_load_updates
3839 ± 0% +58.5% 6085 ± 0% proc-vmstat.nr_dirtied
96594258 ± 0% +48.5% 1.434e+08 ± 1% cpuidle.C3-NHM.time
8501 ± 1% +12.6% 9575 ± 2% sched_debug.cfs_rq[0]:/.min_vruntime
5973 ± 5% +19.1% 7112 ± 0% sched_debug.cfs_rq[2]:/.min_vruntime
10312 ± 6% +25.5% 12945 ± 2% sched_debug.cfs_rq[7]:/.min_vruntime
4.016e+08 ± 0% +58.7% 6.375e+08 ± 0% cpuidle.C1-NHM.time
1074730 ± 0% +58.5% 1702978 ± 0% cpuidle.C1-NHM.usage
35 ± 2% +55.7% 55 ±24% sched_debug.cpu#6.cpu_load[4]
5205 ± 4% +32.0% 6872 ± 2% sched_debug.cfs_rq[7]:/.exec_clock
11362 ± 5% +38.1% 15693 ± 2% softirqs.NET_RX
30.76 ± 0% +35.7% 41.76 ± 0% turbostat.%c1
6215 ± 4% +18.2% 7347 ± 5% sched_debug.cfs_rq[0]:/.tg_load_avg
559 ± 4% +37.5% 769 ±10% sched_debug.cfs_rq[0]:/.tg->runnable_avg
579 ± 4% +37.1% 794 ± 9% sched_debug.cfs_rq[1]:/.tg->runnable_avg
588 ± 3% +36.9% 806 ±10% sched_debug.cfs_rq[2]:/.tg->runnable_avg
622 ± 3% +36.2% 848 ±10% sched_debug.cfs_rq[5]:/.tg->runnable_avg
1.714e+09 ± 0% -20.2% 1.368e+09 ± 0% cpuidle.C6-NHM.time
599 ± 3% +36.0% 815 ±10% sched_debug.cfs_rq[3]:/.tg->runnable_avg
615 ± 3% +35.7% 835 ±10% sched_debug.cfs_rq[4]:/.tg->runnable_avg
635 ± 2% +36.0% 863 ± 9% sched_debug.cfs_rq[6]:/.tg->runnable_avg
646 ± 2% +35.3% 874 ± 9% sched_debug.cfs_rq[7]:/.tg->runnable_avg
6295 ± 4% +16.7% 7347 ± 4% sched_debug.cfs_rq[1]:/.tg_load_avg
6407 ± 4% +17.1% 7501 ± 4% sched_debug.cfs_rq[5]:/.tg_load_avg
6323 ± 4% +18.1% 7465 ± 4% sched_debug.cfs_rq[3]:/.tg_load_avg
6425 ± 4% +16.5% 7487 ± 3% sched_debug.cfs_rq[6]:/.tg_load_avg
6332 ± 4% +17.2% 7418 ± 5% sched_debug.cfs_rq[2]:/.tg_load_avg
6417 ± 4% +16.1% 7452 ± 4% sched_debug.cfs_rq[7]:/.tg_load_avg
4.23 ± 2% +15.9% 4.91 ± 3% turbostat.%pc3
6378 ± 4% +17.3% 7480 ± 5% sched_debug.cfs_rq[4]:/.tg_load_avg
56450565 ± 0% +13.2% 63882668 ± 1% cpuidle.C1E-NHM.time
101678 ± 0% +12.9% 114794 ± 0% softirqs.TIMER
47 ± 3% +8.5% 51 ± 4% turbostat.CTMP
20813 ± 0% +95.2% 40618 ± 0% time.involuntary_context_switches
30496 ± 0% +58.7% 48386 ± 0% time.file_system_outputs
55307 ± 0% +58.9% 87877 ± 0% time.voluntary_context_switches
1155035 ± 0% +58.7% 1833059 ± 0% time.minor_page_faults
6.60 ± 0% +36.9% 9.03 ± 0% turbostat.%c0
11 ± 0% +45.5% 16 ± 0% time.percent_of_cpu_this_job_got
31.28 ± 0% +39.9% 43.76 ± 0% time.system_time
8112 ± 0% +3.3% 8378 ± 0% vmstat.system.cs
7738 ± 0% +36.5% 10566 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-a06/qperf/600s

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
75582 ± 2% -3.1% 73228 ± 0% qperf.sctp.latency
6.479e+08 ± 0% -0.9% 6.42e+08 ± 0% qperf.udp.recv_bw
6.561e+08 ± 0% -1.3% 6.477e+08 ± 0% qperf.udp.send_bw
-148722 ±-11% +44.8% -215280 ±-15% sched_debug.cfs_rq[3]:/.spread0
196852 ±31% -30.1% 137556 ± 4% sched_debug.cpu#1.ttwu_local
16973 ± 4% -9.2% 15414 ± 3% cpuidle.POLL.time
3.30 ±39% -61.6% 1.27 ±17% perf-profile.cpu-cycles.copy_user_generic_string.skb_copy_datagram_iovec.tcp_recvmsg.inet_recvmsg.sock_aio_read
197 ± 1% -6.9% 183 ± 6% sched_debug.cpu#2.cpu_load[3]
329 ± 5% -13.2% 285 ± 9% sched_debug.cfs_rq[2]:/.load
422 ±18% -23.4% 323 ±20% sched_debug.cfs_rq[3]:/.blocked_load_avg
624 ±13% -16.3% 522 ±10% sched_debug.cfs_rq[3]:/.tg_load_contrib
3715157 ± 2% +11.7% 4149477 ± 0% sched_debug.cpu#0.sched_count
323 ± 3% -13.4% 280 ± 6% sched_debug.cpu#2.load
186 ± 8% +12.4% 209 ± 6% sched_debug.cpu#0.cpu_load[0]
353 ± 7% -20.4% 281 ± 9% sched_debug.cfs_rq[1]:/.load
457483 ± 9% +20.2% 549887 ± 8% sched_debug.cpu#1.avg_idle
100532 ±18% -40.7% 59572 ±17% meminfo.DirectMap4k
246 ± 7% -12.2% 216 ± 2% sched_debug.cfs_rq[2]:/.runnable_load_avg
84774 ± 4% -23.5% 64863 ± 4% softirqs.RCU
16026 ± 5% -16.3% 13414 ± 4% cpuidle.C2-ATM.usage
9358028 ± 6% -17.8% 7692487 ± 6% cpuidle.C4-ATM.time
5313 ± 7% -25.2% 3972 ±10% cpuidle.C4-ATM.usage
1769 ± 2% +14.6% 2027 ± 6% slabinfo.kmalloc-192.active_objs
85994 ± 1% -14.2% 73762 ± 1% softirqs.SCHED
150630 ± 3% -16.0% 126496 ± 4% cpuidle.C6-ATM.usage
116081 ± 0% +12.5% 130610 ± 1% time.involuntary_context_switches

testbox/testcase/testparams: lkp-sb03/nepim/300s-100%-tcp

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2499 ± 2% -6.1% 2347 ± 2% nepim.tcp.avg.snd_s
655257 ± 2% -6.1% 615397 ± 2% nepim.tcp.avg.kbps_out
654746 ± 2% -6.0% 615555 ± 2% nepim.tcp.avg.kbps_in
2497 ± 2% -6.0% 2348 ± 2% nepim.tcp.avg.rcv_s
-163648 ±-14% -33.4% -108977 ±-9% sched_debug.cfs_rq[24]:/.spread0
-162978 ±-14% -34.9% -106150 ±-9% sched_debug.cfs_rq[21]:/.spread0
-163786 ±-14% -33.3% -109304 ±-10% sched_debug.cfs_rq[30]:/.spread0
-161694 ±-15% -47.6% -84805 ±-32% sched_debug.cfs_rq[14]:/.spread0
-154868 ±-18% -37.2% -97180 ±-4% sched_debug.cfs_rq[12]:/.spread0
2705 ±29% -83.0% 461 ±38% sched_debug.cpu#9.ttwu_local
-145997 ±-22% -37.5% -91304 ±-11% sched_debug.cfs_rq[11]:/.spread0
-163278 ±-15% -44.7% -90304 ±-20% sched_debug.cfs_rq[15]:/.spread0
-156828 ±-16% -35.9% -100544 ±-12% sched_debug.cfs_rq[17]:/.spread0
-163188 ±-15% -33.9% -107948 ±-9% sched_debug.cfs_rq[23]:/.spread0
-161060 ±-14% -36.7% -102013 ±-8% sched_debug.cfs_rq[18]:/.spread0
-156604 ±-15% -35.2% -101557 ±-11% sched_debug.cfs_rq[25]:/.spread0
-162125 ±-13% -34.2% -106729 ±-9% sched_debug.cfs_rq[29]:/.spread0
-162791 ±-15% -34.5% -106675 ±-10% sched_debug.cfs_rq[31]:/.spread0
-163378 ±-15% -36.0% -104576 ±-10% sched_debug.cfs_rq[22]:/.spread0
-81764 ±-12% -42.8% -46748 ±-22% sched_debug.cfs_rq[1]:/.spread0
-152211 ±-18% -51.6% -73594 ±-41% sched_debug.cfs_rq[5]:/.spread0
-158111 ±-15% -31.9% -107656 ±-10% sched_debug.cfs_rq[19]:/.spread0
-161454 ±-14% -34.5% -105758 ±-10% sched_debug.cfs_rq[20]:/.spread0
-157597 ±-18% -32.7% -106088 ±-9% sched_debug.cfs_rq[26]:/.spread0
-157783 ±-10% -31.7% -107791 ±-9% sched_debug.cfs_rq[16]:/.spread0
-163864 ±-14% -34.5% -107286 ±-9% sched_debug.cfs_rq[27]:/.spread0
-160294 ±-16% -43.2% -91021 ±-21% sched_debug.cfs_rq[13]:/.spread0
-154265 ±-17% -57.9% -64949 ±-48% sched_debug.cfs_rq[4]:/.spread0
-156174 ±-13% -56.0% -68677 ±-46% sched_debug.cfs_rq[7]:/.spread0
-163039 ±-14% -35.4% -105312 ±-9% sched_debug.cfs_rq[28]:/.spread0
2648 ±17% -70.4% 784 ±37% sched_debug.cpu#12.ttwu_local
3794 ±13% -71.5% 1082 ±43% sched_debug.cpu#4.ttwu_local
4710 ±28% -83.9% 756 ±38% sched_debug.cpu#1.ttwu_local
39796 ±46% -68.5% 12541 ±17% cpuidle.C6-SNB.time
293 ± 2% +49.7% 439 ±22% sched_debug.cpu#31.ttwu_count
2434 ±41% +57.1% 3824 ±10% sched_debug.cpu#8.ttwu_local
54924.58 ±23% -64.4% 19562.82 ±36% sched_debug.cfs_rq[9]:/.exec_clock
190 ± 4% -44.6% 105 ±12% sched_debug.cpu#0.load
453881 ±15% -67.5% 147391 ±30% sched_debug.cpu#9.sched_count
10 ±36% -43.8% 6 ±23% sched_debug.cpu#18.nr_uninterruptible
209867 ±24% -57.2% 89799 ±21% sched_debug.cpu#8.ttwu_count
23 ±43% +77.5% 42 ±16% numa-numastat.node0.other_node
3549 ± 6% +67.4% 5941 ± 1% sched_debug.cpu#25.ttwu_count
1 ± 0% +166.7% 2 ±17% sched_debug.cfs_rq[18]:/.nr_spread_over
665 ±13% +145.1% 1631 ±42% sched_debug.cfs_rq[28]:/.exec_clock
588 ±36% +38.8% 816 ± 4% sched_debug.cpu#27.ttwu_count
2526 ± 3% -48.9% 1291 ±31% sched_debug.cpu#11.ttwu_local
626 ±11% -52.8% 296 ±20% sched_debug.cpu#23.sched_goidle
192 ± 4% -49.0% 98 ±12% sched_debug.cfs_rq[0]:/.load
24.72 ± 6% -18.7% 20.10 ± 6% turbostat.%pc2
1763 ± 1% -50.5% 872 ±10% sched_debug.cpu#0.curr->pid
58263 ±29% +106.5% 120293 ±32% sched_debug.cpu#4.sched_count
1176 ± 9% +63.0% 1916 ± 6% sched_debug.cpu#25.ttwu_local
3 ±27% +122.2% 6 ±46% sched_debug.cfs_rq[11]:/.runnable_load_avg
242 ± 7% -42.5% 139 ± 9% sched_debug.cpu#0.cpu_load[2]
128 ±15% -41.6% 75 ±19% sched_debug.cpu#1.cpu_load[2]
471 ±26% +133.4% 1101 ±38% sched_debug.cfs_rq[21]:/.exec_clock
192 ± 3% -49.7% 97 ±12% sched_debug.cfs_rq[0]:/.runnable_load_avg
263 ± 8% -41.3% 154 ± 8% sched_debug.cpu#0.cpu_load[3]
22 ±14% +64.7% 37 ±15% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
451167 ±15% -67.8% 145407 ±30% sched_debug.cpu#9.nr_switches
1076 ±13% +64.1% 1766 ±14% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
923811 ± 3% -8.6% 844463 ± 7% sched_debug.cpu#2.avg_idle
348 ±13% -33.5% 231 ±29% sched_debug.cfs_rq[18]:/.tg_load_contrib
141 ±16% -43.5% 79 ±18% sched_debug.cpu#1.cpu_load[3]
1350 ±21% +192.3% 3946 ±30% sched_debug.cpu#21.ttwu_count
1334 ±10% -52.4% 635 ±17% sched_debug.cpu#23.nr_switches
346 ±20% -65.6% 119 ±48% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
15865 ±20% -65.6% 5457 ±47% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
216 ± 6% -43.9% 121 ± 9% sched_debug.cpu#0.cpu_load[1]
344 ±15% -33.8% 228 ±32% sched_debug.cfs_rq[18]:/.blocked_load_avg
0.94 ±12% -46.6% 0.50 ±29% turbostat.%c3
53828 ±17% +138.4% 128330 ±38% sched_debug.cpu#0.sched_count
276 ± 8% -39.9% 166 ± 6% sched_debug.cpu#0.cpu_load[4]
115 ±14% -39.2% 70 ±23% sched_debug.cpu#1.cpu_load[1]
150 ±14% -44.8% 83 ±17% sched_debug.cpu#1.cpu_load[4]
117 ±21% -62.7% 43 ±38% sched_debug.cpu#9.cpu_load[2]
104 ±18% -60.7% 41 ±32% sched_debug.cpu#9.cpu_load[1]
224950 ±14% -67.8% 72354 ±30% sched_debug.cpu#9.sched_goidle
95 ±17% -56.1% 42 ±24% sched_debug.cfs_rq[9]:/.load
3.634e+08 ± 3% -8.0% 3.343e+08 ± 5% proc-vmstat.pgalloc_normal
38481 ±14% -27.7% 27816 ± 0% sched_debug.cfs_rq[1]:/.exec_clock
3095 ± 9% +114.8% 6648 ± 6% sched_debug.cpu#17.ttwu_count
101 ±28% -62.2% 38 ±29% sched_debug.cpu#9.load
116290 ±21% -63.1% 42940 ±35% sched_debug.cfs_rq[9]:/.min_vruntime
318 ± 6% -53.0% 149 ± 7% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
195 ± 3% -47.7% 102 ±10% sched_debug.cpu#0.cpu_load[0]
14526 ± 7% -52.8% 6853 ± 7% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
195 ± 4% -50.0% 97 ±13% sched_debug.cfs_rq[0]:/.tg_load_contrib
4452 ±16% +104.1% 9084 ±15% sched_debug.cpu#28.ttwu_count
109 ±11% -39.4% 66 ±31% sched_debug.cpu#1.cpu_load[0]
5937 ± 8% +45.3% 8626 ± 7% sched_debug.cpu#17.sched_count
2727 ±29% -39.7% 1644 ±15% sched_debug.cfs_rq[30]:/.min_vruntime
93 ±14% +47.3% 137 ±12% sched_debug.cpu#27.ttwu_local
110 ±14% -53.3% 51 ±15% sched_debug.cfs_rq[1]:/.tg_load_contrib
509 ±20% +57.1% 799 ±18% sched_debug.cpu#28.ttwu_local
180930 ±17% -48.4% 93315 ± 8% sched_debug.cpu#1.sched_goidle
529 ± 7% -47.8% 276 ±12% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
106 ±17% -58.0% 44 ±10% sched_debug.cfs_rq[1]:/.runnable_load_avg
24335 ± 7% -47.9% 12681 ±12% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
116 ±16% -59.6% 47 ±10% sched_debug.cfs_rq[1]:/.load
1050 ±28% -61.4% 405 ±24% sched_debug.cpu#9.curr->pid
81635 ± 5% -7.0% 75959 ± 4% sched_debug.cpu#19.nr_load_updates
2611 ± 7% -29.7% 1834 ±19% sched_debug.cpu#3.ttwu_local
2454 ±10% +20.0% 2944 ± 1% sched_debug.cpu#17.sched_goidle
201182 ±17% -48.1% 104439 ±10% sched_debug.cpu#0.ttwu_count
997 ±22% +34.2% 1338 ±20% proc-vmstat.numa_hint_faults_local
362192 ±17% -48.1% 187835 ± 8% sched_debug.cpu#1.nr_switches
1093 ± 8% +72.8% 1889 ± 7% sched_debug.cpu#17.ttwu_local
105 ± 9% -51.9% 50 ± 9% sched_debug.cpu#1.load
5926 ± 8% +35.7% 8040 ± 2% sched_debug.cpu#17.nr_switches
566873 ±13% +29.7% 735000 ± 8% sched_debug.cpu#1.avg_idle
112 ±21% -62.5% 42 ±26% sched_debug.cfs_rq[9]:/.tg_load_contrib
3534 ±15% +35.8% 4798 ± 9% sched_debug.cfs_rq[21]:/.min_vruntime
1085 ± 8% -55.1% 487 ± 7% sched_debug.cpu#1.curr->pid
105312 ± 9% -17.5% 86874 ± 5% sched_debug.cpu#9.nr_load_updates
3473 ±11% +62.3% 5636 ±14% sched_debug.cfs_rq[28]:/.min_vruntime
822 ± 8% +74.0% 1430 ±19% sched_debug.cpu#19.ttwu_count
380149 ±18% -47.1% 201060 ± 7% sched_debug.cpu#1.sched_count
134 ±23% -64.9% 47 ±43% sched_debug.cpu#9.cpu_load[3]
97430 ± 3% -10.0% 87706 ± 6% sched_debug.cpu#1.nr_load_updates
84743 ±16% -24.2% 64197 ± 1% sched_debug.cfs_rq[1]:/.min_vruntime
65 ± 9% +62.9% 107 ±30% sched_debug.cpu#31.ttwu_local
93974 ± 9% -13.1% 81632 ± 7% sched_debug.cpu#10.nr_load_updates
970168 ± 4% -4.7% 925055 ± 5% sched_debug.cpu#6.avg_idle
99271909 ±11% +46.2% 1.451e+08 ± 5% cpuidle.C1-SNB.time
101 ±15% -58.7% 42 ±26% sched_debug.cpu#9.cpu_load[0]
82245 ±14% -37.9% 51114 ±14% sched_debug.cpu#0.nr_load_updates
34310 ± 3% +24.7% 42799 ± 8% softirqs.RCU
149 ±24% -65.8% 51 ±45% sched_debug.cpu#9.cpu_load[4]
79674 ±14% -41.6% 46569 ±14% sched_debug.cfs_rq[0]:/.exec_clock
166507 ±14% -33.4% 110946 ± 9% sched_debug.cfs_rq[0]:/.min_vruntime
88 ±16% -56.4% 38 ±25% sched_debug.cfs_rq[9]:/.runnable_load_avg
87747 ± 7% -8.1% 80678 ± 6% sched_debug.cpu#11.nr_load_updates
1185 ±32% -41.6% 693 ± 6% sched_debug.cpu#18.sched_goidle
85483 ± 6% -6.9% 79608 ± 4% sched_debug.cpu#12.nr_load_updates
731 ± 0% +25.5% 917 ± 9% slabinfo.blkdev_requests.active_objs
731 ± 0% +25.5% 917 ± 9% slabinfo.blkdev_requests.num_objs
748 ±18% +26.9% 950 ± 7% sched_debug.cpu#26.sched_goidle
58322 ±13% +22.4% 71395 ± 6% sched_debug.cpu#2.nr_load_updates
300852 Â 3% -22.5% 233044 Â 2% cpuidle.C7-SNB.usage
571714 Â 5% +24.2% 710013 Â 9% sched_debug.cpu#9.avg_idle
10.29 Â 5% +9.3% 11.25 Â 2% turbostat.%c1
742 Â 2% +22.6% 910 Â 7% slabinfo.xfs_buf.num_objs
742 Â 2% +22.6% 910 Â 7% slabinfo.xfs_buf.active_objs
383 Â 5% +14.9% 441 Â 5% numa-vmstat.node1.nr_page_table_pages
1533 Â 4% +15.4% 1769 Â 6% numa-meminfo.node1.PageTables
524 Â 3% -14.4% 448 Â 9% numa-vmstat.node0.nr_page_table_pages
2081 Â 3% -13.8% 1794 Â 9% numa-meminfo.node0.PageTables
12203 Â 4% +6.3% 12970 Â 2% slabinfo.kmalloc-256.num_objs
546 Â24% +2329.3% 13280 Â 8% time.involuntary_context_switches
1827 Â 5% +23.9% 2263 Â14% time.minor_page_faults
3166 Â 0% -5.8% 2984 Â 0% vmstat.system.in

testbox/testcase/testparams: xps2/pigz/100%-128K

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
3 Â27% +500.0% 18 Â16% sched_debug.cfs_rq[7]:/.nr_spread_over
10 Â 4% -71.0% 3 Â27% sched_debug.cfs_rq[4]:/.nr_spread_over
575 Â44% +75.4% 1009 Â14% sched_debug.cfs_rq[5]:/.tg_load_contrib
2 Â 0% +116.7% 4 Â10% sched_debug.cfs_rq[0]:/.nr_spread_over
116 Â 5% +19.4% 139 Â 6% sched_debug.cfs_rq[7]:/.load
8 Â30% +50.0% 13 Â16% sched_debug.cfs_rq[5]:/.nr_spread_over
114 Â 2% +11.4% 127 Â 3% sched_debug.cfs_rq[7]:/.runnable_load_avg
10 Â 4% +18.8% 12 Â 3% cpuidle.POLL.usage
115 Â 3% +24.1% 142 Â 4% sched_debug.cpu#7.load
111 Â 4% +9.9% 122 Â 1% sched_debug.cpu#5.cpu_load[1]
109 Â 4% +11.6% 122 Â 1% sched_debug.cpu#5.cpu_load[2]
303 Â14% -31.0% 209 Â26% sched_debug.cfs_rq[1]:/.blocked_load_avg
109 Â 3% +10.7% 120 Â 1% sched_debug.cpu#5.cpu_load[3]
11242 Â39% -46.6% 6002 Â27% sched_debug.cpu#2.sched_goidle
109 Â 3% +8.6% 118 Â 1% sched_debug.cpu#5.cpu_load[4]
5622 Â24% +78.6% 10041 Â16% sched_debug.cpu#0.sched_goidle
113 Â 1% +12.6% 128 Â 6% sched_debug.cpu#7.cpu_load[1]
2051 Â 2% -21.3% 1614 Â16% slabinfo.kmalloc-256.active_objs
1428 Â11% -20.1% 1140 Â 4% cpuidle.C6-NHM.usage
966 Â 3% +23.2% 1190 Â 4% slabinfo.kmalloc-96.active_objs
966 Â 3% +23.2% 1190 Â 4% slabinfo.kmalloc-96.num_objs
112 Â 0% +16.0% 130 Â 5% sched_debug.cpu#7.cpu_load[0]
2244 Â 3% -12.3% 1969 Â 6% slabinfo.kmalloc-256.num_objs

testbox/testcase/testparams: lkp-a05/iperf/300s-tcp

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
8639 Â 6% -55.0% 3890 Â11% cpuidle.POLL.time
324 Â12% +23.4% 400 Â 1% sched_debug.cfs_rq[3]:/.blocked_load_avg
351 Â 5% -34.7% 229 Â 6% cpuidle.POLL.usage
538 Â11% +21.3% 653 Â 4% sched_debug.cfs_rq[3]:/.tg_load_contrib
188 Â 8% +17.2% 221 Â 8% sched_debug.cpu#3.cpu_load[3]
2271 Â 5% +15.9% 2632 Â 7% sched_debug.cpu#1.curr->pid
6143 Â 6% -20.8% 4864 Â 5% cpuidle.C2-ATM.usage
168 Â 9% +23.0% 207 Â 7% sched_debug.cpu#3.cpu_load[4]
4822666 Â 6% -20.7% 3824022 Â 5% cpuidle.C4-ATM.time
2978 Â 1% -25.4% 2221 Â 4% cpuidle.C4-ATM.usage
135546 Â 4% -7.4% 125521 Â 1% sched_debug.cpu#2.nr_load_updates
44960 Â 5% -24.3% 34026 Â 2% softirqs.SCHED
59568 Â 1% -22.3% 46262 Â 4% cpuidle.C6-ATM.usage
31677 Â 0% +42.5% 45147 Â 1% time.involuntary_context_switches

testbox/testcase/testparams: lkp-sb03/nepim/300s-100%-tcp6

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
-175469 Â-9% -45.6% -95473 Â-2% sched_debug.cfs_rq[24]:/.spread0
-171833 Â-9% -45.1% -94396 Â-3% sched_debug.cfs_rq[21]:/.spread0
-175185 Â-9% -46.3% -94082 Â-4% sched_debug.cfs_rq[30]:/.spread0
9641 Â29% +510.9% 58903 Â13% sched_debug.cfs_rq[12]:/.min_vruntime
2277 Â28% +1040.1% 25970 Â15% sched_debug.cfs_rq[12]:/.exec_clock
-167372 Â-10% -60.2% -66538 Â-5% sched_debug.cfs_rq[14]:/.spread0
-167598 Â-11% -75.7% -40731 Â-14% sched_debug.cfs_rq[12]:/.spread0
7439 Â14% +635.8% 54741 Â21% sched_debug.cfs_rq[15]:/.min_vruntime
-169801 Â-9% -73.6% -44894 Â-31% sched_debug.cfs_rq[15]:/.spread0
-168310 Â-9% -46.6% -89926 Â-2% sched_debug.cfs_rq[17]:/.spread0
-174844 Â-9% -44.5% -96979 Â-3% sched_debug.cfs_rq[23]:/.spread0
-173449 Â-9% -45.2% -95119 Â-2% sched_debug.cfs_rq[18]:/.spread0
-167940 Â-9% -47.1% -88900 Â-2% sched_debug.cfs_rq[25]:/.spread0
-174120 Â-9% -44.6% -96498 Â-2% sched_debug.cfs_rq[29]:/.spread0
-173030 Â-8% -44.8% -95548 Â-2% sched_debug.cfs_rq[31]:/.spread0
90.71 Â47% +1113.5% 1100.77 Â 7% sched_debug.cfs_rq[24]:/.exec_clock
-172929 Â-9% -44.0% -96907 Â-3% sched_debug.cfs_rq[22]:/.spread0
-79197 Â-12% -25.9% -58693 Â-10% sched_debug.cfs_rq[1]:/.spread0
-152042 Â-8% -38.2% -93998 Â-3% sched_debug.cfs_rq[5]:/.spread0
-160250 Â-10% -50.3% -79621 Â-4% sched_debug.cfs_rq[3]:/.spread0
-173703 Â-9% -44.5% -96326 Â-2% sched_debug.cfs_rq[19]:/.spread0
-161707 Â-6% -46.3% -86771 Â-1% sched_debug.cfs_rq[2]:/.spread0
-170621 Â-10% -47.6% -89343 Â-3% sched_debug.cfs_rq[20]:/.spread0
-167504 Â-10% -42.7% -95923 Â-2% sched_debug.cfs_rq[6]:/.spread0
-171102 Â-8% -51.3% -83293 Â-13% sched_debug.cfs_rq[26]:/.spread0
-172487 Â-10% -44.0% -96550 Â-1% sched_debug.cfs_rq[16]:/.spread0
-172553 Â-9% -46.1% -92991 Â-2% sched_debug.cfs_rq[27]:/.spread0
-162867 Â-7% -74.9% -40947 Â-16% sched_debug.cfs_rq[13]:/.spread0
3582 Â24% -92.2% 277 Â45% sched_debug.cpu#5.ttwu_local
1 Â35% +825.0% 12 Â 7% sched_debug.cpu#26.cpu_load[1]
-166730 Â-11% -45.7% -90466 Â-1% sched_debug.cfs_rq[4]:/.spread0
-170615 Â-9% -45.1% -93612 Â-1% sched_debug.cfs_rq[7]:/.spread0
-170862 Â-10% -45.6% -92877 Â-3% sched_debug.cfs_rq[28]:/.spread0
2409 Â 7% -93.0% 169 Â24% sched_debug.cpu#7.ttwu_local
2923 Â29% -66.4% 983 Â37% sched_debug.cpu#12.ttwu_local
-6 Â-18% -70.0% -2 Â-40% sched_debug.cpu#9.nr_uninterruptible
3067 Â38% -68.0% 982 Â30% sched_debug.cpu#14.ttwu_local
2931 Â 7% -81.3% 547 Â40% sched_debug.cpu#4.ttwu_local
4447 Â15% -37.3% 2790 Â19% sched_debug.cpu#15.ttwu_local
-77693 Â-33% -68.9% -24161 Â-39% sched_debug.cfs_rq[9]:/.spread0
-154541 Â-17% -73.6% -40812 Â-16% sched_debug.cfs_rq[10]:/.spread0
2 Â40% +7216.7% 146 Â13% sched_debug.cfs_rq[13]:/.tg_runnable_contrib
38 Â39% -55.3% 17 Â29% sched_debug.cfs_rq[3]:/.tg_runnable_contrib
1771 Â38% -53.8% 818 Â26% sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
5503 Â14% +988.7% 59913 Â43% sched_debug.cpu#15.ttwu_count
30 Â12% -31.9% 20 Â18% sched_debug.cpu#25.nr_uninterruptible
15 Â 6% +42.6% 22 Â16% sched_debug.cpu#17.nr_uninterruptible
0 Â 0% +Inf% 62 Â11% sched_debug.cpu#13.cpu_load[4]
0 Â 0% +Inf% 56 Â10% sched_debug.cpu#13.cpu_load[3]
1 Â 0% +133.3% 2 Â20% sched_debug.cfs_rq[20]:/.nr_spread_over
7389 Â12% +26.6% 9356 Â 7% sched_debug.cpu#25.nr_switches
7939 Â23% -87.0% 1036 Â11% sched_debug.cpu#5.ttwu_count
3883 Â19% -80.8% 745 Â11% sched_debug.cpu#7.ttwu_count
423 Â 6% +151.3% 1063 Â42% sched_debug.cpu#31.ttwu_count
131 Â35% +5024.7% 6713 Â13% sched_debug.cfs_rq[13]:/.avg->runnable_avg_sum
5 Â 8% +150.0% 13 Â15% sched_debug.cpu#26.cpu_load[2]
6749 Â 7% -94.9% 342 Â 8% sched_debug.cfs_rq[5]:/.exec_clock
46851.02 Â10% -25.0% 35154.20 Â15% sched_debug.cfs_rq[9]:/.exec_clock
9868 Â24% +235.4% 33097 Â 9% sched_debug.cfs_rq[14]:/.min_vruntime
14373 Â34% +308.3% 58688 Â11% sched_debug.cfs_rq[13]:/.min_vruntime
3 Â12% +318.2% 15 Â11% sched_debug.cpu#26.cpu_load[4]
7007 Â23% +296.6% 27792 Â33% sched_debug.cfs_rq[11]:/.exec_clock
33105 Â34% +116.4% 71634 Â15% sched_debug.cpu#14.sched_count
5012 Â23% -65.6% 1725 Â13% sched_debug.cpu#4.ttwu_count
21902 Â23% +520.5% 135902 Â48% sched_debug.cpu#8.sched_count
1941 Â32% +89.7% 3683 Â36% sched_debug.cpu#21.sched_count
28 Â41% -77.9% 6 Â14% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
0 Â 0% +Inf% 76 Â31% sched_debug.cpu#11.cpu_load[4]
27194 Â44% +219.6% 86905 Â 7% sched_debug.cpu#15.nr_switches
60 Â 8% +229.7% 200 Â31% sched_debug.cpu#30.ttwu_local
3322 Â 7% -27.6% 2404 Â30% sched_debug.cpu#28.sched_count
174 Â16% -53.2% 81 Â18% sched_debug.cpu#0.load
497648 Â 8% -49.7% 250454 Â10% sched_debug.cpu#9.sched_count
3023 Â15% -84.8% 459 Â19% sched_debug.cpu#2.ttwu_local
5 Â 8% +158.8% 14 Â14% sched_debug.cpu#26.cpu_load[3]
13564 Â44% +210.7% 42148 Â 8% sched_debug.cpu#15.sched_goidle
1158 Â13% -53.9% 534 Â11% sched_debug.cpu#29.ttwu_count
278319 Â11% -55.5% 123957 Â 8% sched_debug.cpu#8.ttwu_count
70 Â33% +74.8% 122 Â 7% sched_debug.cpu#24.ttwu_local
3732 Â 5% +72.0% 6419 Â 7% sched_debug.cpu#25.ttwu_count
760 Â 5% +134.8% 1784 Â43% sched_debug.cpu#27.nr_switches
25 Â19% -100.0% 0 Â 0% sched_debug.cfs_rq[9]:/.blocked_load_avg
1546 Â17% -28.4% 1107 Â30% sched_debug.cpu#29.sched_count
138 Â 3% -60.5% 54 Â15% sched_debug.cpu#28.cpu_load[2]
94 Â 2% -53.9% 43 Â 9% sched_debug.cpu#28.cpu_load[3]
174 Â16% -53.2% 81 Â18% sched_debug.cfs_rq[0]:/.load
1652 Â12% -52.8% 779 Â20% sched_debug.cpu#0.curr->pid
1315 Â11% +52.1% 2000 Â 3% sched_debug.cpu#25.ttwu_local
164 Â32% +2186.6% 3757 Â13% sched_debug.cpu#24.ttwu_count
35074 Â16% +169.1% 94384 Â 8% sched_debug.cpu#15.sched_count
6458 Â41% -66.4% 2172 Â33% sched_debug.cpu#1.ttwu_count
3 Â12% +1509.1% 59 Â36% sched_debug.cfs_rq[11]:/.runnable_load_avg
84863 Â 5% -44.2% 47316 Â 6% sched_debug.cfs_rq[8]:/.exec_clock
11542079 Â 8% -63.6% 4201723 Â 7% numa-vmstat.node0.numa_local
55 Â 2% -37.3% 34 Â 3% sched_debug.cpu#28.cpu_load[4]
11597631 Â 8% -63.3% 4257210 Â 7% numa-vmstat.node0.numa_hit
20087 Â26% +532.6% 127072 Â42% sched_debug.cpu#8.nr_switches
6826 Â22% -73.1% 1837 Â11% sched_debug.cpu#2.ttwu_count
14685 Â23% -57.7% 6217 Â 9% sched_debug.cpu#6.nr_switches
181370 Â 5% -44.3% 100933 Â10% sched_debug.cfs_rq[8]:/.min_vruntime
166 Â 4% -61.2% 64 Â15% sched_debug.cpu#28.cpu_load[1]
246 Â15% -43.2% 140 Â13% sched_debug.cpu#0.cpu_load[2]
1333 Â40% -76.4% 314 Â12% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
171 Â14% -52.8% 80 Â19% sched_debug.cfs_rq[0]:/.runnable_load_avg
3309 Â 7% -27.7% 2392 Â30% sched_debug.cpu#28.nr_switches
271 Â13% -39.1% 165 Â12% sched_debug.cpu#0.cpu_load[3]
136 Â12% +138.0% 325 Â28% sched_debug.cfs_rq[20]:/.blocked_load_avg
9301 Â 8% +15.4% 10734 Â12% sched_debug.cfs_rq[25]:/.min_vruntime
0 Â 0% +Inf% 71 Â31% sched_debug.cpu#11.cpu_load[3]
26 Â 8% +556.4% 170 Â30% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
489684 Â 7% -50.7% 241248 Â13% sched_debug.cpu#9.nr_switches
1243 Â 8% +532.9% 7868 Â30% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
8102 Â17% -63.3% 2974 Â 8% sched_debug.cpu#5.sched_goidle
168 Â 5% -60.1% 67 Â12% sched_debug.cpu#28.cpu_load[0]
8079 Â12% +912.4% 81796 Â 9% sched_debug.cpu#12.sched_goidle
11 Â 7% +248.5% 38 Â 7% sched_debug.cfs_rq[26]:/.tg_runnable_contrib
67 Â15% +93.0% 129 Â30% sched_debug.cpu#22.ttwu_local
525 Â 7% +244.4% 1809 Â 7% sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
217 Â17% -48.2% 112 Â15% sched_debug.cpu#0.cpu_load[1]
7318 Â23% -58.1% 3069 Â 9% sched_debug.cpu#6.sched_goidle
20346 Â41% +576.0% 137537 Â13% sched_debug.cpu#13.nr_switches
9915 Â26% +506.6% 60151 Â46% sched_debug.cpu#8.sched_goidle
7317 Â23% +925.4% 75030 Â49% sched_debug.cpu#11.ttwu_count
7 Â23% +228.6% 23 Â32% sched_debug.cfs_rq[21]:/.tg_runnable_contrib
3075 Â13% +16.2% 3572 Â10% sched_debug.cpu#25.sched_goidle
239 Â 8% -71.4% 68 Â41% sched_debug.cfs_rq[29]:/.blocked_load_avg
17742699 Â 8% -62.8% 6609108 Â 4% proc-vmstat.pgalloc_dma32
240 Â 9% -71.6% 68 Â41% sched_debug.cfs_rq[29]:/.tg_load_contrib
185 Â 3% -51.7% 89 Â26% sched_debug.cfs_rq[8]:/.load
289 Â13% -35.1% 187 Â13% sched_debug.cpu#0.cpu_load[4]
185 Â 3% -51.7% 89 Â17% sched_debug.cpu#8.load
16302 Â17% -63.0% 6038 Â 8% sched_debug.cpu#5.nr_switches
108 Â18% -27.9% 78 Â25% sched_debug.cpu#9.cpu_load[2]
1230 Â21% -45.9% 665 Â16% sched_debug.cpu#31.sched_count
10134 Â41% +569.8% 67883 Â13% sched_debug.cpu#13.sched_goidle
100 Â17% -31.6% 68 Â24% sched_debug.cpu#9.cpu_load[1]
244283 Â 7% -51.0% 119587 Â13% sched_debug.cpu#9.sched_goidle
3263 Â 7% -58.9% 1342 Â24% sched_debug.cpu#28.curr->pid
168 Â 5% -60.1% 67 Â12% sched_debug.cfs_rq[28]:/.runnable_load_avg
168 Â 5% -59.6% 68 Â14% sched_debug.cfs_rq[28]:/.load
1378 Â 7% -32.4% 931 Â31% sched_debug.cpu#28.sched_goidle
45204 Â 7% -62.7% 16868 Â13% sched_debug.cfs_rq[1]:/.exec_clock
3177 Â 6% +77.9% 5650 Â 5% sched_debug.cpu#17.ttwu_count
168 Â 5% -59.6% 68 Â14% sched_debug.cpu#28.load
1592 Â 4% +44.3% 2298 Â 4% sched_debug.cpu#20.sched_goidle
1217 Â21% -46.4% 653 Â16% sched_debug.cpu#31.nr_switches
282 Â 8% +68.1% 474 Â 9% sched_debug.cfs_rq[20]:/.tg_load_contrib
99546 Â10% -24.2% 75473 Â12% sched_debug.cfs_rq[9]:/.min_vruntime
9735 Â43% -61.9% 3711 Â13% sched_debug.cfs_rq[6]:/.min_vruntime
355 Â22% +209.9% 1101 Â30% sched_debug.cfs_rq[21]:/.avg->runnable_avg_sum
260 Â26% +41.7% 368 Â22% sched_debug.cfs_rq[21]:/.tg_load_contrib
56733 Â44% +84.8% 104825 Â32% sched_debug.cpu#11.sched_count
193 Â19% -50.0% 96 Â19% sched_debug.cpu#0.cpu_load[0]
29282 Â17% -28.6% 20906 Â 8% sched_debug.cpu#0.nr_switches
1022 Â13% -67.3% 333 Â10% sched_debug.cfs_rq[6]:/.exec_clock
177 Â14% -49.9% 88 Â12% sched_debug.cfs_rq[0]:/.tg_load_contrib
567 Â23% -49.5% 286 Â19% sched_debug.cpu#31.sched_goidle
6500 Â13% -19.3% 5245 Â 3% sched_debug.cpu#3.ttwu_count
16269 Â12% +916.1% 165305 Â 8% sched_debug.cpu#12.nr_switches
1772 Â14% +134.8% 4162 Â 3% sched_debug.cfs_rq[24]:/.min_vruntime
5866 Â 4% +28.6% 7544 Â 5% sched_debug.cpu#17.sched_count
79126 Â 2% +6.8% 84535 Â 3% sched_debug.cpu#24.nr_load_updates
2058 Â18% +169.8% 5553 Â42% sched_debug.cfs_rq[30]:/.min_vruntime
127 Â 7% -50.3% 63 Â29% sched_debug.cfs_rq[1]:/.tg_load_contrib
536 Â 8% -32.8% 360 Â21% sched_debug.cpu#28.ttwu_local
11834 Â25% -58.5% 4905 Â 7% sched_debug.cpu#7.nr_switches
32497 Â38% -40.8% 19232 Â30% sched_debug.cpu#2.nr_switches
16058 Â39% -40.9% 9484 Â31% sched_debug.cpu#2.sched_goidle
5899 Â26% -58.9% 2424 Â 7% sched_debug.cpu#7.sched_goidle
230137 Â 3% -66.7% 76736 Â 9% sched_debug.cpu#1.sched_goidle
507 Â12% -49.9% 254 Â18% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
3627 Â10% -45.4% 1980 Â 0% meminfo.AnonHugePages
23190 Â12% -49.7% 11664 Â18% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
2954 Â 8% -47.8% 1543 Â11% sched_debug.cpu#3.ttwu_local
79091 Â 2% +7.2% 84785 Â 2% sched_debug.cpu#27.nr_load_updates
2474 Â38% -44.2% 1381 Â11% proc-vmstat.numa_hint_faults
244948 Â 7% -61.7% 93754 Â 4% sched_debug.cpu#0.ttwu_count
24408 Â 2% +154.6% 62149 Â31% sched_debug.cfs_rq[11]:/.min_vruntime
2521 Â37% -43.6% 1422 Â10% proc-vmstat.numa_pte_updates
186 Â 4% -49.5% 94 Â23% sched_debug.cfs_rq[8]:/.tg_load_contrib
535 Â 6% +76.0% 941 Â 0% sched_debug.cpu#20.ttwu_local
387 Â 8% +70.8% 661 Â31% sched_debug.cpu#22.sched_goidle
460655 Â 3% -66.6% 153819 Â 9% sched_debug.cpu#1.nr_switches
1533 Â18% -38.6% 940 Â19% sched_debug.cpu#29.nr_switches
1055 Â15% +68.2% 1775 Â 7% sched_debug.cpu#17.ttwu_local
5854 Â 4% +28.6% 7532 Â 5% sched_debug.cpu#17.nr_switches
518944 Â 5% +46.1% 758059 Â18% sched_debug.cpu#1.avg_idle
21065 Â 9% +10.8% 23341 Â10% numa-meminfo.node0.Active(anon)
186 Â 4% -51.3% 90 Â23% sched_debug.cpu#8.cpu_load[0]
240 Â 3% -53.2% 112 Â28% sched_debug.cpu#8.cpu_load[3]
215 Â 3% -52.5% 102 Â26% sched_debug.cpu#8.cpu_load[2]
450 Â26% -44.7% 249 Â32% sched_debug.cfs_rq[28]:/.tg_load_contrib
3760 Â 3% +52.8% 5744 Â 1% sched_debug.cpu#20.sched_count
12799 Â19% -45.6% 6960 Â 7% sched_debug.cpu#0.sched_goidle
23771309 Â 8% -62.1% 9017612 Â 3% numa-numastat.node0.local_node
23771332 Â 8% -62.1% 9017635 Â 3% numa-numastat.node0.numa_hit
531 Â 0% -49.2% 270 Â25% sched_debug.cfs_rq[8]:/.tg_runnable_contrib
193 Â 3% -51.5% 94 Â24% sched_debug.cpu#8.cpu_load[1]
24361 Â 0% -49.2% 12377 Â25% sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
2525 Â 8% -19.6% 2030 Â 4% sched_debug.cpu#18.sched_count
358 Â 3% -45.5% 195 Â32% sched_debug.cfs_rq[26]:/.blocked_load_avg
358 Â 3% -42.8% 205 Â31% sched_debug.cfs_rq[26]:/.tg_load_contrib
5268 Â 9% +10.8% 5836 Â10% numa-vmstat.node0.nr_active_anon
257 Â 2% -52.8% 121 Â28% sched_debug.cpu#8.cpu_load[4]
836 Â 8% +75.8% 1471 Â28% sched_debug.cpu#22.nr_switches
467556 Â 3% -63.2% 172089 Â15% sched_debug.cpu#1.sched_count
100306 Â 2% -8.2% 92129 Â 1% sched_debug.cpu#1.nr_load_updates
13406964 Â 9% +54.8% 20753143 Â 3% numa-vmstat.node1.numa_local
13421601 Â 9% +54.7% 20768222 Â 3% numa-vmstat.node1.numa_hit
201 Â 8% +50.4% 303 Â 5% sched_debug.cpu#18.ttwu_local
98041 Â 7% -58.2% 40940 Â17% sched_debug.cfs_rq[1]:/.min_vruntime
968087 Â 3% -13.4% 838650 Â 3% sched_debug.cpu#12.avg_idle
85296 Â 3% +13.9% 97133 Â 1% sched_debug.cpu#10.nr_load_updates
1822 Â 7% -52.6% 863 Â17% sched_debug.cpu#8.curr->pid
84101 Â 1% +16.2% 97746 Â 3% sched_debug.cpu#15.nr_load_updates
100 Â15% -34.9% 65 Â22% sched_debug.cpu#9.cpu_load[0]
87101 Â10% -44.4% 48461 Â 3% sched_debug.cpu#0.nr_load_updates
82841 Â 2% +10.4% 91426 Â 1% sched_debug.cpu#14.nr_load_updates
186 Â 4% -53.2% 87 Â26% sched_debug.cfs_rq[8]:/.runnable_load_avg
83538 Â 2% +16.0% 96885 Â 3% sched_debug.cpu#13.nr_load_updates
84391 Â 8% -48.3% 43669 Â 4% sched_debug.cfs_rq[0]:/.exec_clock
987037 Â 1% -14.2% 846398 Â 8% sched_debug.cpu#13.avg_idle
177237 Â 9% -43.8% 99634 Â 2% sched_debug.cfs_rq[0]:/.min_vruntime
86370 Â 4% +13.7% 98170 Â 6% sched_debug.cpu#11.nr_load_updates
1085 Â12% -21.0% 856 Â 4% sched_debug.cpu#18.sched_goidle
82842 Â 1% +16.7% 96690 Â 0% sched_debug.cpu#12.nr_load_updates
128898 Â 0% -15.0% 109598 Â 2% sched_debug.cpu#8.nr_load_updates
678775 Â 1% +23.3% 836906 Â 7% sched_debug.cpu#11.avg_idle
997365 Â 0% -16.3% 834655 Â 6% sched_debug.cpu#8.avg_idle
25575328 Â10% +56.4% 39987543 Â 2% numa-numastat.node1.numa_hit
25575276 Â10% +56.4% 39987507 Â 2% numa-numastat.node1.local_node
679 Â19% -39.1% 414 Â21% sched_debug.cpu#29.sched_goidle
3791 Â10% +19.1% 4515 Â 2% sched_debug.cfs_rq[18]:/.min_vruntime
1263 Â 4% +13.0% 1427 Â 5% sched_debug.cpu#18.ttwu_count
53498 Â 8% +42.0% 75956 Â 4% sched_debug.cpu#2.nr_load_updates
939453 Â 4% -12.6% 821265 Â 2% sched_debug.cpu#10.avg_idle
295153 Â 2% -15.6% 249216 Â 1% cpuidle.C7-SNB.usage
583842 Â12% +32.4% 773272 Â 2% sched_debug.cpu#9.avg_idle
3748 Â 3% +49.5% 5605 Â 4% sched_debug.cpu#20.nr_switches
2420 Â11% -16.7% 2017 Â 4% sched_debug.cpu#18.nr_switches
11077 Â 8% +19.1% 13188 Â 3% slabinfo.kmalloc-256.active_objs
507 Â12% +25.6% 637 Â10% slabinfo.buffer_head.active_objs
507 Â12% +25.6% 637 Â10% slabinfo.buffer_head.num_objs
16697 Â 9% -11.4% 14793 Â10% slabinfo.vm_area_struct.num_objs
16525 Â10% -11.8% 14577 Â 9% slabinfo.vm_area_struct.active_objs
11616 Â 9% +17.4% 13637 Â 6% slabinfo.kmalloc-256.num_objs
1410 Â 6% +10.8% 1563 Â 3% slabinfo.sock_inode_cache.active_objs
1410 Â 6% +10.8% 1563 Â 3% slabinfo.sock_inode_cache.num_objs
863 Â45% +1260.1% 11737 Â12% time.involuntary_context_switches
3167 Â 0% -4.9% 3011 Â 0% vmstat.system.in

testbox/testcase/testparams: lkp-a05/iperf/300s-udp

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
21 Â 7% -28.1% 15 Â26% sched_debug.cpu#2.cpu_load[4]
3045 Â11% -21.1% 2402 Â14% cpuidle.POLL.time
41 Â14% -41.6% 24 Â46% sched_debug.cpu#2.cpu_load[2]
219 Â19% -35.3% 142 Â23% sched_debug.cfs_rq[2]:/.blocked_load_avg
31 Â11% -36.6% 19 Â37% sched_debug.cpu#2.cpu_load[3]
256 Â22% -36.9% 162 Â27% sched_debug.cfs_rq[2]:/.tg_load_contrib
841 Â 7% -13.0% 731 Â 7% sched_debug.cfs_rq[1]:/.tg_load_contrib
699 Â10% -16.2% 586 Â11% sched_debug.cfs_rq[1]:/.blocked_load_avg
518031 Â 4% +14.6% 593471 Â 7% cpuidle.C2-ATM.time
126580 Â11% -28.2% 90853 Â12% sched_debug.cfs_rq[1]:/.min_vruntime
42977 Â 6% -8.3% 39399 Â 8% sched_debug.cpu#0.nr_load_updates
37026 Â13% -31.0% 25539 Â 4% softirqs.RCU
1286 Â 2% -26.1% 950 Â 7% cpuidle.C2-ATM.usage
25472 Â 7% -23.7% 19445 Â 2% softirqs.SCHED
2017 Â 5% -11.5% 1785 Â 1% sched_debug.cfs_rq[2]:/.tg_load_avg
87377 Â 8% -10.7% 77993 Â 2% softirqs.TIMER

testbox/testcase/testparams: lkp-sb03/nepim/300s-25%-udp6

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
-317754 Â-1% -4.0% -305106 Â-2% sched_debug.cfs_rq[11]:/.spread0
14 Â45% -100.0% 0 Â 0% sched_debug.cpu#18.cpu_load[0]
-320520 Â-1% -3.2% -310228 Â-1% sched_debug.cfs_rq[19]:/.spread0
2408 Â36% -79.9% 484 Â13% sched_debug.cpu#14.ttwu_count
-317961 Â-1% -3.7% -306068 Â-2% sched_debug.cfs_rq[13]:/.spread0
1377 Â 9% -71.7% 389 Â30% sched_debug.cpu#5.ttwu_local
-315516 Â-1% -4.5% -301211 Â-1% sched_debug.cfs_rq[7]:/.spread0
1644 Â33% -80.3% 324 Â31% sched_debug.cpu#7.ttwu_local
-3 Â-35% +86.3% -6 Â-7% sched_debug.cpu#14.nr_uninterruptible
2011 Â28% -92.0% 160 Â27% sched_debug.cpu#12.ttwu_local
1042 Â41% -93.0% 73 Â41% sched_debug.cpu#14.ttwu_local
6624 Â20% +427.9% 34970 Â49% sched_debug.cpu#25.sched_count
2004 Â18% -81.8% 364 Â24% sched_debug.cpu#4.ttwu_local
828 Â20% +274.7% 3104 Â 1% sched_debug.cfs_rq[2]:/.exec_clock
1346 Â15% -82.0% 242 Â 4% sched_debug.cpu#6.ttwu_local
2859 Â33% -76.8% 662 Â36% sched_debug.cpu#13.ttwu_count
3677 Â20% -72.4% 1016 Â27% sched_debug.cpu#12.ttwu_count
2745 Â 6% +90.3% 5223 Â 2% sched_debug.cpu#16.ttwu_count
6117 Â 7% +471.5% 34961 Â49% sched_debug.cpu#25.nr_switches
2999 Â13% -69.9% 902 Â12% sched_debug.cpu#5.ttwu_count
3415 Â26% -75.8% 827 Â26% sched_debug.cpu#7.ttwu_count
4872 Â 5% +43.1% 6971 Â 1% sched_debug.cpu#16.nr_switches
3900 Â20% -74.8% 984 Â14% sched_debug.cpu#4.ttwu_count
8394 Â15% -35.2% 5441 Â24% sched_debug.cpu#8.sched_count
58 Â22% +61.0% 94 Â18% sched_debug.cpu#30.ttwu_local
2669 Â20% -81.5% 492 Â14% sched_debug.cpu#2.ttwu_local
807 Â18% +93.7% 1563 Â 0% sched_debug.cpu#16.ttwu_local
1278 Â 9% -15.0% 1087 Â 1% sched_debug.cfs_rq[3]:/.exec_clock
5841 Â10% +305.6% 23695 Â45% sched_debug.cpu#25.ttwu_count
8530 Â30% -49.4% 4317 Â 8% sched_debug.cpu#13.sched_count
3003 Â25% -76.9% 693 Â12% sched_debug.cpu#6.ttwu_count
161 Â23% +55.5% 251 Â18% sched_debug.cpu#22.ttwu_count
6519 Â18% +90.7% 12431 Â19% sched_debug.cpu#30.nr_load_updates
3305 Â20% -60.5% 1304 Â18% sched_debug.cpu#11.ttwu_local
4883 Â 5% +43.0% 6982 Â 1% sched_debug.cpu#16.sched_count
0.13 Â 8% +56.1% 0.20 Â23% turbostat.%pc2
2086 Â 4% +21.1% 2527 Â 3% sched_debug.cpu#16.sched_goidle
5017 Â21% -61.6% 1927 Â37% sched_debug.cpu#2.ttwu_count
6689 Â11% -44.2% 3735 Â19% sched_debug.cpu#6.nr_switches
19 Â39% -59.2% 8 Â17% sched_debug.cpu#18.cpu_load[1]
1924 Â32% +50.9% 2904 Â11% sched_debug.cpu#9.nr_switches
3912 Â 6% -42.5% 2251 Â 3% sched_debug.cpu#5.sched_goidle
5781 Â13% -36.9% 3649 Â 4% sched_debug.cpu#12.sched_goidle
11126 Â 7% -23.4% 8520 Â 1% sched_debug.cpu#3.sched_goidle
3323 Â11% -45.1% 1825 Â19% sched_debug.cpu#6.sched_goidle
7285 Â14% -40.9% 4305 Â 8% sched_debug.cpu#13.nr_switches
3566 Â16% -45.0% 1963 Â25% sched_debug.cpu#8.sched_goidle
7022 Â11% -41.2% 4132 Â 6% sched_debug.cpu#11.ttwu_count
14533 Â22% +43.8% 20896 Â 5% sched_debug.cpu#0.sched_count
13253 Â12% -43.9% 7439 Â 4% sched_debug.cpu#12.sched_count
2426 Â 7% +24.2% 3014 Â11% sched_debug.cpu#25.sched_goidle
797026 Â37% -43.9% 446843 Â10% cpuidle.C3-SNB.time
7893 Â 6% -41.8% 4591 Â 4% sched_debug.cpu#5.nr_switches
7392 Â16% +74.9% 12931 Â24% sched_debug.cpu#28.nr_load_updates
3612 Â14% -42.2% 2086 Â 8% sched_debug.cpu#13.sched_goidle
937 Â34% +53.1% 1435 Â11% sched_debug.cpu#9.sched_goidle
5792 Â 6% -39.9% 3479 Â 1% sched_debug.cpu#4.sched_goidle
11646 Â 7% -39.3% 7073 Â 1% sched_debug.cpu#4.nr_switches
1794 Â17% -23.1% 1380 Â17% sched_debug.cfs_rq[31]:/.exec_clock
22924 Â 7% -23.6% 17516 Â 4% sched_debug.cpu#11.nr_switches
2633 Â19% +64.0% 4319 Â30% sched_debug.cfs_rq[19]:/.min_vruntime
59079 Â32% -47.8% 30837 Â22% sched_debug.cpu#2.sched_count
9835 Â12% +55.6% 15307 Â19% sched_debug.cpu#16.nr_load_updates
22381 Â 7% -22.4% 17362 Â 1% sched_debug.cpu#3.nr_switches
2 Â42% +94.4% 4 Â10% sched_debug.cfs_rq[10]:/.nr_spread_over
6781 Â17% +81.3% 12296 Â23% sched_debug.cpu#22.nr_load_updates
24758 Â14% -28.0% 17837 Â 5% sched_debug.cpu#11.sched_count
7096 Â 2% +14.7% 8136 Â 5% sched_debug.cpu#0.nr_switches
6661 Â17% +78.0% 11853 Â24% sched_debug.cpu#29.nr_load_updates
6785 Â18% +78.1% 12082 Â24% sched_debug.cpu#21.nr_load_updates
6854 Â12% -38.8% 4194 Â 4% sched_debug.cpu#3.ttwu_count
7612 Â15% +70.9% 13007 Â23% sched_debug.cpu#20.nr_load_updates
11659 Â12% -36.3% 7426 Â 4% sched_debug.cpu#12.nr_switches
11402 Â 7% -24.8% 8571 Â 4% sched_debug.cpu#11.sched_goidle
7111 Â15% +77.7% 12637 Â23% sched_debug.cpu#17.nr_load_updates
6973 Â16% +75.0% 12203 Â24% sched_debug.cpu#31.nr_load_updates
6809 Â17% +78.1% 12129 Â25% sched_debug.cpu#24.nr_load_updates
7591 Â15% +72.5% 13093 Â22% sched_debug.cpu#18.nr_load_updates
7391 Â19% -50.8% 3634 Â16% sched_debug.cpu#7.nr_switches
12020 Â12% -39.8% 7242 Â14% sched_debug.cpu#2.nr_switches
5974 Â13% -41.0% 3522 Â14% sched_debug.cpu#2.sched_goidle
3666 Â20% -51.8% 1768 Â16% sched_debug.cpu#7.sched_goidle
5384 Â25% -58.7% 2223 Â14% sched_debug.cpu#1.sched_goidle
6942 Â16% +87.4% 13011 Â22% sched_debug.cpu#19.nr_load_updates
3090 Â16% -52.0% 1483 Â13% sched_debug.cpu#3.ttwu_local
6633 Â18% +85.6% 12313 Â23% sched_debug.cpu#27.nr_load_updates
6347 Â 2% +38.2% 8770 Â 2% sched_debug.cpu#0.ttwu_count
5397 Â17% +74.9% 9439 Â44% sched_debug.cfs_rq[11]:/.min_vruntime
4066 Â24% -23.3% 3117 Â14% sched_debug.cfs_rq[27]:/.min_vruntime
6801 Â16% +77.5% 12071 Â24% sched_debug.cpu#23.nr_load_updates
7568 Â14% +77.7% 13450 Â22% sched_debug.cpu#9.nr_load_updates
59 Â31% +104.4% 121 Â43% sched_debug.cfs_rq[30]:/.avg->runnable_avg_sum
466 Â14% +90.9% 890 Â24% sched_debug.cpu#19.ttwu_count
9801 Â13% +41.9% 13906 Â21% sched_debug.cpu#6.nr_load_updates
8128 Â25% +58.3% 12867 Â26% sched_debug.cpu#1.nr_load_updates
238 Â12% +84.9% 440 Â31% sched_debug.cpu#18.ttwu_local
6744 Â27% +46.1% 9853 Â10% sched_debug.cfs_rq[1]:/.min_vruntime
10030 Â12% +40.0% 14044 Â21% sched_debug.cpu#5.nr_load_updates
445 Â12% -41.6% 260 Â10% cpuidle.C3-SNB.usage
11607 Â 6% +34.9% 15655 Â15% sched_debug.cpu#15.nr_load_updates
30852509 Â20% +59.8% 49301317 Â13% cpuidle.C1-SNB.time
9340 Â 9% +46.6% 13688 Â19% sched_debug.cpu#14.nr_load_updates
9843 Â13% +45.2% 14288 Â20% sched_debug.cpu#13.nr_load_updates
10191 Â14% +41.5% 14420 Â20% sched_debug.cpu#8.nr_load_updates
12555 Â 8% +26.9% 15934 Â18% sched_debug.cpu#3.nr_load_updates
125011 Â 1% -33.4% 83286 Â 4% cpuidle.C7-SNB.usage
4440 Â 9% +15.2% 5114 Â 2% numa-vmstat.node1.nr_slab_reclaimable
17765 Â 9% +15.2% 20459 Â 2% numa-meminfo.node1.SReclaimable
20407 Â 7% -14.5% 17450 Â 3% numa-meminfo.node0.SReclaimable
5101 Â 7% -14.5% 4362 Â 3% numa-vmstat.node0.nr_slab_reclaimable
39288 Â 0% +48.6% 58382 Â 5% softirqs.SCHED
385696 Â 0% +12.4% 433713 Â 1% softirqs.TIMER
312 Â 0% +12223.0% 38521 Â 1% time.involuntary_context_switches
2333 Â 0% -4.2% 2234 Â 0% vmstat.system.in

testbox/testcase/testparams: brickland1/boot/1

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0 +Inf% 1 dmesg.BUG:soft_lockup-CPU_stuck_for_s
0 +Inf% 1 dmesg.Kernel_panic-not_syncing:softlockup:hung_tasks
0 +Inf% 1 last_state.is_incomplete_run
0 +Inf% 1 last_state.booting

testbox/testcase/testparams: lkp-sb03/nepim/300s-100%-udp

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
-332412 Â-2% -31.4% -228062 Â-41% sched_debug.cfs_rq[24]:/.spread0
-332189 Â-1% -29.6% -233944 Â-42% sched_debug.cfs_rq[21]:/.spread0
-333098 Â-1% -29.6% -234545 Â-42% sched_debug.cfs_rq[30]:/.spread0
-331878 Â-2% -31.4% -227531 Â-45% sched_debug.cfs_rq[14]:/.spread0
-320408 Â-6% -29.2% -226996 Â-42% sched_debug.cfs_rq[12]:/.spread0
-329173 Â-2% -30.6% -228514 Â-42% sched_debug.cfs_rq[11]:/.spread0
-328417 Â-3% -29.4% -231861 Â-42% sched_debug.cfs_rq[15]:/.spread0
-326879 Â-1% -30.2% -228116 Â-43% sched_debug.cfs_rq[17]:/.spread0
-333639 Â-1% -30.3% -232642 Â-42% sched_debug.cfs_rq[23]:/.spread0
-330937 Â-1% -30.5% -230138 Â-43% sched_debug.cfs_rq[18]:/.spread0
-333169 Â-1% -29.8% -233884 Â-42% sched_debug.cfs_rq[29]:/.spread0
-332021 Â-1% -29.9% -232732 Â-43% sched_debug.cfs_rq[31]:/.spread0
-332316 Â-1% -29.6% -233922 Â-43% sched_debug.cfs_rq[22]:/.spread0
-329709 Â 0% -32.1% -223771 Â-43% sched_debug.cfs_rq[5]:/.spread0
-324925 Â-1% -33.7% -215479 Â-46% sched_debug.cfs_rq[3]:/.spread0
-331383 Â-1% -30.0% -231988 Â-43% sched_debug.cfs_rq[19]:/.spread0
-328445 Â 0% -31.3% -225649 Â-43% sched_debug.cfs_rq[2]:/.spread0
-330852 Â-1% -29.9% -231837 Â-43% sched_debug.cfs_rq[20]:/.spread0
-330882 Â-1% -31.1% -227986 Â-43% sched_debug.cfs_rq[6]:/.spread0
-331089 Â-2% -37.4% -207235 Â-41% sched_debug.cfs_rq[26]:/.spread0
-332707 Â-1% -35.2% -215441 Â-41% sched_debug.cfs_rq[16]:/.spread0
2896 Â32% -73.8% 758 Â28% sched_debug.cpu#14.ttwu_count
-329983 Â-1% -30.9% -227891 Â-46% sched_debug.cfs_rq[27]:/.spread0
-321000 Â-3% -28.5% -229602 Â-43% sched_debug.cfs_rq[13]:/.spread0
-327709 Â 0% -32.5% -221267 Â-45% sched_debug.cfs_rq[4]:/.spread0
-328881 Â-1% -31.1% -226445 Â-47% sched_debug.cfs_rq[7]:/.spread0
1859 Â29% -86.7% 247 Â38% sched_debug.cpu#12.ttwu_local
6731 Â 6% +32.5% 8916 Â10% sched_debug.cpu#25.sched_count
661 Â17% +493.7% 3925 Â24% sched_debug.cfs_rq[2]:/.exec_clock
-334661 Â-1% -30.6% -232419 Â-42% sched_debug.cfs_rq[9]:/.spread0
1379 Â 6% -85.7% 197 Â32% sched_debug.cpu#6.ttwu_local
2949 Â18% -54.7% 1337 Â38% sched_debug.cpu#13.ttwu_count
3358 Â32% -64.7% 1184 Â40% sched_debug.cpu#12.ttwu_count
6723 Â 6% +25.2% 8419 Â 6% sched_debug.cpu#25.nr_switches
3130 Â 2% -70.6% 920 Â37% sched_debug.cpu#5.ttwu_count
2865 Â21% -71.7% 810 Â16% sched_debug.cpu#7.ttwu_count
95 Â38% -48.1% 49 Â34% sched_debug.cfs_rq[13]:/.avg->runnable_avg_sum
0 Â 0% +Inf% 1 Â 0% sched_debug.cpu#29.cpu_load[3]
118.77 Â47% +320.1% 498.91 Â34% sched_debug.cfs_rq[9]:/.exec_clock
12507 Â 3% +184.0% 35517 Â42% sched_debug.cpu#26.nr_load_updates
59 Â10% +34.1% 80 Â18% sched_debug.cpu#30.ttwu_local
2042 Â32% -80.2% 404 Â35% sched_debug.cpu#2.ttwu_local
52 Â 9% +51.0% 79 Â24% sched_debug.cpu#23.ttwu_local
4980 Â27% -46.4% 2670 Â22% sched_debug.cpu#15.sched_goidle
5708 Â 3% -12.7% 4983 Â 9% sched_debug.cpu#20.ttwu_count
3175 Â16% -67.7% 1025 Â45% sched_debug.cpu#6.ttwu_count
2500 Â11% -47.5% 1313 Â14% sched_debug.cpu#11.ttwu_local
8413 Â22% -60.3% 3343 Â26% sched_debug.cpu#10.sched_goidle
7474 Â 7% -28.7% 5329 Â 2% sched_debug.cpu#14.nr_switches
1275 Â 3% +44.5% 1843 Â 4% sched_debug.cpu#25.ttwu_local
3774 Â25% -60.1% 1506 Â27% sched_debug.cpu#2.ttwu_count
7163 Â 2% -51.5% 3473 Â18% sched_debug.cpu#6.nr_switches
3206 Â 7% -37.0% 2020 Â27% sched_debug.cpu#20.curr->pid
7 Â25% +86.4% 13 Â29% sched_debug.cfs_rq[25]:/.nr_spread_over
3709 Â 7% -29.5% 2615 Â 2% sched_debug.cpu#14.sched_goidle
10 Â29% +116.7% 21 Â43% sched_debug.cfs_rq[11]:/.tg_runnable_contrib
490 Â27% +110.7% 1032 Â41% sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
3762 Â 0% -34.3% 2470 Â 6% sched_debug.cpu#5.sched_goidle
5345 Â20% -29.4% 3775 Â15% sched_debug.cpu#12.sched_goidle
599 Â12% -23.7% 457 Â 7% sched_debug.cfs_rq[18]:/.tg_load_contrib
10630 Â 3% -23.4% 8143 Â 9% sched_debug.cpu#3.sched_goidle
139.20 Â49% -42.5% 80.00 Â 6% sched_debug.cfs_rq[30]:/.exec_clock
584 Â33% -41.1% 344 Â12% sched_debug.cpu#21.ttwu_count
595 Â11% -25.5% 444 Â 8% sched_debug.cfs_rq[18]:/.blocked_load_avg
41 Â 9% -37.9% 25 Â20% sched_debug.cpu#20.cpu_load[3]
3561 Â 2% -52.4% 1693 Â20% sched_debug.cpu#6.sched_goidle
7825 Â12% -35.1% 5081 Â20% sched_debug.cpu#13.nr_switches
6110 Â10% -32.5% 4123 Â 4% sched_debug.cpu#11.ttwu_count
121 Â 7% -32.9% 81 Â18% sched_debug.cpu#20.cpu_load[1]
539 Â49% +111.1% 1139 Â11% sched_debug.cpu#11.curr->pid
28969 Â 6% -11.8% 25554 Â 6% proc-vmstat.pgalloc_dma32
21 Â 9% -38.5% 13 Â24% sched_debug.cpu#20.cpu_load[4]
364837 Â28% +80.5% 658376 Â39% cpuidle.C3-SNB.time
7578 Â 1% -33.2% 5062 Â 7% sched_debug.cpu#5.nr_switches
161 Â 7% -30.8% 111 Â17% sched_debug.cpu#20.cpu_load[0]
3866 Â12% -35.5% 2493 Â20% sched_debug.cpu#13.sched_goidle
2670 Â 2% +96.8% 5255 Â 1% sched_debug.cpu#17.ttwu_count
12432 Â 5% +168.0% 33312 Â27% sched_debug.cpu#16.nr_load_updates
21409 Â 3% -22.4% 16617 Â 9% sched_debug.cpu#3.nr_switches
418 Â37% -46.1% 225 Â37% sched_debug.cfs_rq[20]:/.tg_load_contrib
10 Â 4% +45.2% 15 Â23% sched_debug.cfs_rq[17]:/.tg_runnable_contrib
6560 Â 5% -40.9% 3876 Â 4% sched_debug.cpu#3.ttwu_count
10781 Â21% -28.5% 7712 Â15% sched_debug.cpu#12.nr_switches
15101 Â 5% +89.1% 28556 Â49% sched_debug.cpu#17.nr_load_updates
492 Â 5% +45.8% 718 Â22% sched_debug.cfs_rq[17]:/.avg->runnable_avg_sum
12110 Â 5% +131.6% 28045 Â45% sched_debug.cpu#24.nr_load_updates
3021 Â44% -52.8% 1426 Â28% sched_debug.cfs_rq[30]:/.min_vruntime
163 Â 6% -33.9% 108 Â20% sched_debug.cfs_rq[20]:/.runnable_load_avg
7318 Â 9% -33.1% 4898 Â15% sched_debug.cpu#7.nr_switches
10903 Â14% -25.7% 8103 Â 1% sched_debug.cpu#2.nr_switches
5414 Â14% -27.2% 3943 Â 1% sched_debug.cpu#2.sched_goidle
3639 Â 9% -33.4% 2423 Â15% sched_debug.cpu#7.sched_goidle
84 Â43% +51.2% 128 Â 2% sched_debug.cpu#11.load
5531 Â 4% -62.4% 2080 Â26% sched_debug.cpu#1.sched_goidle
2894 Â 7% -57.2% 1239 Â17% sched_debug.cpu#3.ttwu_local
2128 Â 6% +20.7% 2569 Â 5% sched_debug.cpu#17.sched_goidle
7757 Â10% +109.8% 16272 Â42% sched_debug.cpu#0.ttwu_count
3802 Â16% -46.1% 2048 Â28% sched_debug.cfs_rq[22]:/.min_vruntime
162 Â 6% -33.5% 108 Â20% sched_debug.cfs_rq[20]:/.load
750 Â14% +122.9% 1671 Â 1% sched_debug.cpu#17.ttwu_local
4961 Â 5% +42.8% 7085 Â 3% sched_debug.cpu#17.nr_switches
162 Â 6% -33.5% 108 Â20% sched_debug.cpu#20.load
74 Â 8% -35.1% 48 Â19% sched_debug.cpu#20.cpu_load[2]
3929 Â38% -48.4% 2026 Â23% sched_debug.cfs_rq[21]:/.min_vruntime
283094 Â 7% -12.0% 249151 Â 3% numa-numastat.node0.local_node
283099 Â 7% -12.0% 249158 Â 3% numa-numastat.node0.numa_hit
89434 Â 7% -13.0% 77829 Â 3% meminfo.DirectMap4k
525 Â30% +57.0% 824 Â22% sched_debug.cpu#19.ttwu_count
557 Â 9% +40.7% 785 Â17% sched_debug.cfs_rq[18]:/.exec_clock
16533 Â 4% +77.6% 29359 Â43% sched_debug.cpu#4.nr_load_updates
15325 Â 4% +83.0% 28049 Â47% sched_debug.cpu#5.nr_load_updates
153651 Â 0% -29.4% 108464 Â44% sched_debug.cpu#0.nr_load_updates
154973 Â 0% -32.2% 105144 Â46% sched_debug.cfs_rq[0]:/.exec_clock
336114 Â 1% -29.8% 235968 Â42% sched_debug.cfs_rq[0]:/.min_vruntime
17101 Â 2% +67.3% 28611 Â49% sched_debug.cpu#11.nr_load_updates
16749 Â 8% +65.9% 27795 Â48% sched_debug.cpu#12.nr_load_updates
18016 Â 6% +67.5% 30176 Â46% sched_debug.cpu#3.nr_load_updates
267383 Â 7% +12.9% 301747 Â 2% numa-numastat.node1.numa_hit
267353 Â 7% +12.9% 301717 Â 2% numa-numastat.node1.local_node
9873 Â 4% -11.7% 8721 Â 1% slabinfo.kmalloc-192.num_objs
7870 Â 4% -9.5% 7121 Â 6% numa-vmstat.node1.nr_slab_unreclaimable
9873 Â 4% -11.7% 8721 Â 1% slabinfo.kmalloc-192.active_objs
4392 Â 5% -10.4% 3933 Â 7% sched_debug.cpu#20.nr_switches
5151 Â 1% -11.8% 4542 Â 5% numa-vmstat.node1.nr_slab_reclaimable
20608 Â 1% -11.8% 18170 Â 5% numa-meminfo.node1.SReclaimable
45893 Â 2% +43.5% 65872 Â12% softirqs.SCHED
31483 Â 4% -9.5% 28488 Â 6% numa-meminfo.node1.SUnreclaim
27 Â36% -41.0% 16 Â20% sched_debug.cfs_rq[20]:/.tg_runnable_contrib
1310 Â36% -40.6% 778 Â20% sched_debug.cfs_rq[20]:/.avg->runnable_avg_sum
52092 Â 3% -10.4% 46658 Â 5% numa-meminfo.node1.Slab
17004 Â 7% -13.8% 14657 Â 6% slabinfo.vm_area_struct.num_objs
16843 Â 8% -14.1% 14468 Â 4% slabinfo.vm_area_struct.active_objs
395985 Â 1% +10.1% 436060 Â 0% softirqs.TIMER

testbox/testcase/testparams: lkp-sb03/nepim/300s-25%-tcp

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
10695 Â 0% -2.1% 10467 Â 1% nepim.tcp.avg.snd_s
2803859 Â 0% -2.1% 2743881 Â 1% nepim.tcp.avg.kbps_out
2804100 Â 0% -2.2% 2743293 Â 1% nepim.tcp.avg.kbps_in
10698 Â 0% -2.2% 10467 Â 1% nepim.tcp.avg.rcv_s
-121719 Â-8% -25.3% -90877 Â-22% sched_debug.cfs_rq[24]:/.spread0
-121918 Â-7% -23.1% -93789 Â-26% sched_debug.cfs_rq[30]:/.spread0
-120019 Â-7% -34.1% -79077 Â-44% sched_debug.cfs_rq[14]:/.spread0
-111279 Â-3% -26.7% -81532 Â-45% sched_debug.cfs_rq[12]:/.spread0
-116127 Â-7% -24.5% -87649 Â-26% sched_debug.cfs_rq[17]:/.spread0
-120630 Â-7% -23.7% -92046 Â-22% sched_debug.cfs_rq[23]:/.spread0
-120159 Â-6% -22.6% -93007 Â-25% sched_debug.cfs_rq[18]:/.spread0
-115146 Â-8% -27.5% -83505 Â-29% sched_debug.cfs_rq[25]:/.spread0
-120796 Â-6% -21.9% -94345 Â-23% sched_debug.cfs_rq[29]:/.spread0
-120845 Â-7% -24.7% -91044 Â-28% sched_debug.cfs_rq[31]:/.spread0
-42392 Â-14% -34.3% -27870 Â-8% sched_debug.cfs_rq[1]:/.spread0
-114000 Â-8% -32.0% -77568 Â-27% sched_debug.cfs_rq[5]:/.spread0
-104612 Â-9% -31.4% -71780 Â-27% sched_debug.cfs_rq[3]:/.spread0
-109295 Â-10% -43.2% -62052 Â-49% sched_debug.cfs_rq[2]:/.spread0
-113348 Â-4% -32.0% -77037 Â-22% sched_debug.cfs_rq[6]:/.spread0
1399 Â 3% +293.1% 5501 Â48% sched_debug.cpu#26.sched_count
-120334 Â-6% -23.7% -91799 Â-25% sched_debug.cfs_rq[26]:/.spread0
-122092 Â-7% -22.2% -94988 Â-23% sched_debug.cfs_rq[16]:/.spread0
-119413 Â-6% -20.7% -94724 Â-24% sched_debug.cfs_rq[27]:/.spread0
4257 Â 8% -90.8% 391 Â48% sched_debug.cpu#5.ttwu_local
-112177 Â-8% -29.2% -79394 Â-24% sched_debug.cfs_rq[4]:/.spread0
-114081 Â-5% -31.0% -78750 Â-25% sched_debug.cfs_rq[7]:/.spread0
-118923 Â-6% -26.3% -87677 Â-31% sched_debug.cfs_rq[28]:/.spread0
1611 Â14% -39.2% 979 Â43% sched_debug.cpu#13.ttwu_local
1403 Â20% -68.4% 444 Â42% sched_debug.cpu#12.ttwu_local
8 Â17% -45.8% 4 Â28% sched_debug.cpu#19.nr_uninterruptible
10328 Â24% +393.1% 50927 Â 9% sched_debug.cpu#25.sched_count
29 Â 9% -34.8% 19 Â 4% sched_debug.cpu#25.nr_uninterruptible
892 Â23% -75.7% 216 Â21% sched_debug.cfs_rq[19]:/.exec_clock
10284 Â24% +394.0% 50801 Â 9% sched_debug.cpu#25.nr_switches
1386 Â 3% +296.0% 5489 Â48% sched_debug.cpu#26.nr_switches
53 Â15% +44.0% 76 Â23% sched_debug.cpu#16.ttwu_local
14 Â15% +83.3% 25 Â30% numa-numastat.node0.other_node
954 Â 8% +408.1% 4849 Â44% sched_debug.cpu#26.ttwu_count
3148 Â 7% +87.6% 5907 Â12% sched_debug.cpu#25.ttwu_count
185.51 Â26% -30.6% 128.82 Â 6% sched_debug.cfs_rq[22]:/.exec_clock
1169 Â16% +81.6% 2124 Â17% sched_debug.cpu#25.ttwu_local
90119 Â11% -42.8% 51537 Â32% sched_debug.cfs_rq[8]:/.exec_clock
176077 Â10% -40.2% 105252 Â29% sched_debug.cfs_rq[8]:/.min_vruntime
176 Â 2% -42.5% 101 Â42% sched_debug.cpu#0.cpu_load[3]
9192 Â17% +59.2% 14633 Â10% sched_debug.cfs_rq[25]:/.min_vruntime
1159 Â 9% -19.8% 929 Â15% sched_debug.cpu#22.sched_count
427 Â 6% -34.1% 281 Â42% sched_debug.cfs_rq[9]:/.tg_runnable_contrib
19503 Â 6% -34.1% 12843 Â42% sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
839 Â 5% +24.5% 1045 Â20% sched_debug.cpu#11.curr->pid
14 Â18% +69.8% 24 Â32% sched_debug.cfs_rq[21]:/.tg_runnable_contrib
4588 Â27% +432.3% 24425 Â10% sched_debug.cpu#25.sched_goidle
205 Â 7% -43.6% 115 Â44% sched_debug.cfs_rq[8]:/.load
2524 Â41% -58.1% 1056 Â14% sched_debug.cpu#19.sched_count
185 Â 2% -38.1% 115 Â32% sched_debug.cpu#0.cpu_load[4]
205 Â 7% -43.6% 115 Â43% sched_debug.cpu#8.load
2248.86 Â 5% +30.0% 2922.92 Â25% sched_debug.cfs_rq[17]:/.exec_clock
139 Â 7% -35.2% 90 Â42% sched_debug.cpu#9.cpu_load[2]
875 Â26% +100.7% 1757 Â18% sched_debug.cpu#31.sched_count
123 Â 8% -35.0% 80 Â42% sched_debug.cpu#9.cpu_load[1]
1195 Â28% +207.3% 3674 Â43% sched_debug.cfs_rq[31]:/.exec_clock
5340 Â21% -56.6% 2318 Â17% sched_debug.cfs_rq[19]:/.min_vruntime
791 Â25% -38.1% 490 Â11% sched_debug.cpu#30.nr_switches
827 Â 8% -45.4% 451 Â16% sched_debug.cpu#19.sched_goidle
140 Â 6% -39.1% 85 Â47% sched_debug.cpu#9.load
863 Â27% +102.1% 1745 Â18% sched_debug.cpu#31.nr_switches
701 Â17% +62.8% 1141 Â31% sched_debug.cfs_rq[21]:/.avg->runnable_avg_sum
7 Â42% +76.2% 12 Â26% sched_debug.cpu#21.cpu_load[4]
359 Â26% -42.1% 208 Â12% sched_debug.cpu#30.sched_goidle
5844 Â 8% +35.5% 7918 Â 6% sched_debug.cpu#17.sched_count
803 Â25% -37.4% 502 Â11% sched_debug.cpu#30.sched_count
2421 Â36% +79.6% 4348 Â40% sched_debug.cfs_rq[30]:/.min_vruntime
257 Â43% -42.6% 147 Â25% sched_debug.cpu#27.ttwu_local
3419 Â27% -63.2% 1256 Â12% sched_debug.cpu#3.ttwu_local
1867 Â34% -57.2% 799 Â 7% proc-vmstat.numa_hint_faults
512960 Â12% -31.0% 354196 Â26% sched_debug.cpu#0.ttwu_count
1094 Â34% -45.8% 592 Â13% proc-vmstat.numa_hint_faults_local
1908 Â33% -55.7% 845 Â 6% proc-vmstat.numa_pte_updates
205 Â 8% -42.4% 118 Â44% sched_debug.cfs_rq[8]:/.tg_load_contrib
97 Â 6% +30.6% 126 Â20% sched_debug.cpu#29.ttwu_local
502 Â10% -19.4% 404 Â18% sched_debug.cpu#22.sched_goidle
935 Â 7% +101.6% 1885 Â 7% sched_debug.cpu#17.ttwu_local
1793 Â 6% -41.8% 1044 Â14% sched_debug.cpu#19.nr_switches
5832 Â 9% +35.5% 7906 Â 6% sched_debug.cpu#17.nr_switches
4005 Â19% +58.2% 6338 Â14% sched_debug.cfs_rq[26]:/.min_vruntime
204 Â 8% -43.8% 115 Â44% sched_debug.cpu#8.cpu_load[0]
261 Â 7% -42.7% 149 Â44% sched_debug.cpu#8.cpu_load[3]
232 Â 8% -42.6% 133 Â44% sched_debug.cpu#8.cpu_load[2]
595 Â 8% -40.3% 355 Â42% sched_debug.cfs_rq[8]:/.tg_runnable_contrib
210 Â 8% -43.0% 119 Â44% sched_debug.cpu#8.cpu_load[1]
5415 Â35% +93.2% 10461 Â45% sched_debug.cfs_rq[28]:/.min_vruntime
27308 Â 8% -40.3% 16304 Â41% sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
331 Â20% -55.4% 147 Â41% sched_debug.cfs_rq[26]:/.blocked_load_avg
333 Â19% -55.7% 147 Â41% sched_debug.cfs_rq[26]:/.tg_load_contrib
3494 Â20% +103.0% 7094 Â41% sched_debug.cfs_rq[31]:/.min_vruntime
283 Â 7% -42.3% 163 Â44% sched_debug.cpu#8.cpu_load[4]
1146 Â 9% -19.9% 918 Â15% sched_debug.cpu#22.nr_switches
157 Â 7% -34.5% 103 Â41% sched_debug.cpu#9.cpu_load[3]
1885 Â 4% -41.0% 1112 Â42% sched_debug.cpu#8.curr->pid
803 Â16% -31.3% 551 Â11% cpuidle.C3-SNB.usage
1.39e+08 Â 2% +41.2% 1.963e+08 Â19% cpuidle.C1-SNB.time
115 Â 7% -34.2% 75 Â43% sched_debug.cpu#9.cpu_load[0]
34157 Â 2% +50.0% 51248 Â 7% softirqs.RCU
204 Â 8% -43.8% 115 Â44% sched_debug.cfs_rq[8]:/.runnable_load_avg
172 Â 6% -33.7% 114 Â42% sched_debug.cpu#9.cpu_load[4]
60313 Â 8% -23.5% 46115 Â23% sched_debug.cfs_rq[0]:/.exec_clock
7005 Â16% -20.0% 5600 Â 8% sched_debug.cfs_rq[20]:/.min_vruntime
131677 Â 2% -8.8% 120031 Â 5% sched_debug.cpu#8.nr_load_updates
1000000 Â 0% -7.8% 921845 Â 3% sched_debug.cpu#8.avg_idle
305943 Â 0% -14.9% 260489 Â 1% cpuidle.C7-SNB.usage
447676 Â 8% +39.2% 623231 Â12% sched_debug.cpu#9.avg_idle
136899 Â 0% +10.9% 151875 Â 2% softirqs.SCHED
10246 Â 5% +13.2% 11600 Â 8% slabinfo.kmalloc-256.active_objs
3693 Â10% -8.4% 3383 Â 8% sched_debug.cfs_rq[30]:/.tg_load_avg
3701 Â 9% -8.4% 3389 Â 8% sched_debug.cfs_rq[31]:/.tg_load_avg
3682 Â 9% -8.4% 3371 Â 8% sched_debug.cfs_rq[29]:/.tg_load_avg
3667 Â 9% -9.0% 3338 Â 8% sched_debug.cfs_rq[28]:/.tg_load_avg
3660 Â10% -9.0% 3331 Â 7% sched_debug.cfs_rq[27]:/.tg_load_avg
378073 Â 0% +17.8% 445472 Â 6% softirqs.TIMER
633 Â23% +1292.7% 8816 Â15% time.involuntary_context_switches
2298705 Â11% -17.8% 1889948 Â14% time.voluntary_context_switches
1947 Â12% -23.5% 1490 Â 1% time.minor_page_faults
256.98 Â 1% +2.0% 262.20 Â 1% time.system_time
21901 Â 1% -5.2% 20756 Â 3% vmstat.system.cs
3707 Â 1% -4.6% 3538 Â 0% vmstat.system.in

testbox/testcase/testparams: lkp-sb03/nepim/300s-25%-tcp6

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
9863 Â 0% -2.9% 9577 Â 1% nepim.tcp.avg.snd_s
2585647 Â 0% -2.9% 2510718 Â 1% nepim.tcp.avg.kbps_out
2585813 Â 0% -2.9% 2510916 Â 1% nepim.tcp.avg.kbps_in
9866 Â 0% -2.9% 9584 Â 1% nepim.tcp.avg.rcv_s
-134976 Â-16% -40.4% -80425 Â-36% sched_debug.cfs_rq[24]:/.spread0
-132063 Â-16% -43.2% -74963 Â-37% sched_debug.cfs_rq[21]:/.spread0
-133375 Â-15% -39.8% -80283 Â-35% sched_debug.cfs_rq[30]:/.spread0
-6 Â-28% -50.0% -3 Â-37% sched_debug.cpu#12.nr_uninterruptible
-128176 Â-14% -40.9% -75751 Â-38% sched_debug.cfs_rq[17]:/.spread0
-133880 Â-15% -42.4% -77048 Â-33% sched_debug.cfs_rq[23]:/.spread0
-132298 Â-16% -40.7% -78493 Â-37% sched_debug.cfs_rq[18]:/.spread0
-126650 Â-16% -42.8% -72400 Â-38% sched_debug.cfs_rq[25]:/.spread0
-133032 Â-15% -42.0% -77124 Â-32% sched_debug.cfs_rq[29]:/.spread0
-131725 Â-17% -39.8% -79248 Â-36% sched_debug.cfs_rq[31]:/.spread0
-133654 Â-16% -41.6% -78031 Â-36% sched_debug.cfs_rq[22]:/.spread0
-124091 Â-18% -60.9% -48524 Â-20% sched_debug.cfs_rq[5]:/.spread0
-122612 Â-21% -67.6% -39778 Â-35% sched_debug.cfs_rq[3]:/.spread0
-130477 Â-17% -40.8% -77255 Â-36% sched_debug.cfs_rq[19]:/.spread0
-108967 Â-27% -61.8% -41596 Â-35% sched_debug.cfs_rq[2]:/.spread0
-129246 Â-17% -44.3% -72006 Â-36% sched_debug.cfs_rq[20]:/.spread0
-132670 Â-16% -41.1% -78095 Â-36% sched_debug.cfs_rq[26]:/.spread0
-130968 Â-19% -44.9% -72217 Â-38% sched_debug.cfs_rq[16]:/.spread0
-133731 Â-16% -42.8% -76524 Â-33% sched_debug.cfs_rq[27]:/.spread0
4116 Â32% -83.3% 686 Â40% sched_debug.cpu#5.ttwu_local
-126780 Â-13% -64.4% -45152 Â-26% sched_debug.cfs_rq[4]:/.spread0
-128099 Â-16% -60.8% -50243 Â-43% sched_debug.cfs_rq[7]:/.spread0
-132645 Â-15% -42.4% -76383 Â-36% sched_debug.cfs_rq[28]:/.spread0
2271 Â26% -65.7% 780 Â20% sched_debug.cpu#4.ttwu_local
8273 Â 9% -45.1% 4538 Â48% cpuidle.C6-SNB.time
5 Â 8% -50.0% 2 Â35% sched_debug.cpu#26.cpu_load[2]
9944 Â42% +275.2% 37311 Â45% sched_debug.cfs_rq[4]:/.min_vruntime
5 Â16% -46.7% 2 Â35% sched_debug.cpu#26.cpu_load[3]
3686 Â16% +66.2% 6129 Â 1% sched_debug.cpu#25.ttwu_count
1411 Â14% -42.3% 813 Â15% sched_debug.cpu#24.nr_switches
1423 Â14% -42.0% 825 Â15% sched_debug.cpu#24.sched_count
791 Â17% +87.4% 1483 Â 4% sched_debug.cfs_rq[28]:/.exec_clock
375 Â16% +74.3% 654 Â22% sched_debug.cpu#27.ttwu_count
2514 Â18% -49.1% 1281 Â 9% sched_debug.cpu#11.ttwu_local
166 Â14% +90.2% 316 Â16% sched_debug.cpu#21.ttwu_local
22218 Â48% +910.3% 224477 Â48% sched_debug.cpu#4.sched_count
192.76 Â48% +437.1% 1035.32 Â48% sched_debug.cfs_rq[22]:/.exec_clock
72170 Â18% -53.8% 33348 Â46% sched_debug.cfs_rq[8]:/.exec_clock
141319 Â16% -51.4% 68708 Â42% sched_debug.cfs_rq[8]:/.min_vruntime
675 Â17% -46.2% 363 Â18% sched_debug.cpu#24.sched_goidle
223 Â 4% -55.6% 99 Â43% sched_debug.cpu#0.cpu_load[2]
3102 Â 9% -67.4% 1010 Â24% sched_debug.cpu#20.curr->pid
8 Â 5% -34.6% 5 Â22% sched_debug.cfs_rq[25]:/.nr_spread_over
243 Â 5% -53.7% 112 Â40% sched_debug.cpu#0.cpu_load[3]
943654 Â 0% -7.9% 869181 Â 5% sched_debug.cpu#3.avg_idle
375 Â25% -56.2% 164 Â27% sched_debug.cfs_rq[18]:/.tg_load_contrib
200 Â 3% -57.8% 84 Â49% sched_debug.cpu#0.cpu_load[1]
375 Â25% -56.2% 164 Â27% sched_debug.cfs_rq[18]:/.blocked_load_avg
82 Â 8% -66.8% 27 Â36% sched_debug.cpu#20.cpu_load[3]
0.64 Â22% -64.1% 0.23 Â14% turbostat.%c3
161 Â 7% -70.4% 47 Â26% sched_debug.cpu#20.cpu_load[1]
327 Â15% -26.4% 241 Â16% sched_debug.cpu#26.ttwu_local
47 Â 8% -60.8% 18 Â45% sched_debug.cpu#20.cpu_load[4]
151 Â 2% -40.7% 89 Â48% sched_debug.cfs_rq[8]:/.load
257 Â 5% -52.1% 123 Â39% sched_debug.cpu#0.cpu_load[4]
151 Â 2% -39.0% 92 Â44% sched_debug.cpu#8.load
165 Â 7% -69.2% 51 Â23% sched_debug.cpu#20.cpu_load[0]
7874 Â 9% -33.1% 5264 Â23% sched_debug.cpu#0.ttwu_local
14892 Â 1% +242.6% 51016 Â49% numa-vmstat.node1.numa_other
310 Â 2% -37.6% 194 Â45% sched_debug.cfs_rq[20]:/.tg_load_contrib
4812 Â16% +144.7% 11778 Â24% sched_debug.cpu#28.ttwu_count
88 Â 0% -41.3% 51 Â48% sched_debug.cpu#1.cpu_load[0]
84 Â14% +43.3% 121 Â21% sched_debug.cpu#27.ttwu_local
165 Â 7% -70.2% 49 Â21% sched_debug.cfs_rq[20]:/.runnable_load_avg
120 Â 9% -44.2% 67 Â21% sched_debug.cfs_rq[1]:/.tg_load_contrib
125 Â24% -39.9% 75 Â 4% sched_debug.cpu#11.load
556084 Â26% -46.6% 296758 Â42% sched_debug.cpu#1.sched_goidle
2211 Â14% -29.4% 1560 Â11% sched_debug.cpu#3.ttwu_local
1410 Â32% -45.0% 775 Â26% proc-vmstat.numa_hint_faults
1822 Â45% -84.2% 287 Â 4% sched_debug.cpu#17.sched_goidle
701369 Â16% -58.8% 289218 Â44% sched_debug.cpu#0.ttwu_count
1451 Â31% -43.7% 816 Â24% proc-vmstat.numa_pte_updates
1112623 Â26% -46.3% 597441 Â42% sched_debug.cpu#1.nr_switches
167 Â 7% -70.5% 49 Â21% sched_debug.cfs_rq[20]:/.load
167 Â 7% -70.5% 49 Â21% sched_debug.cpu#20.load
550128 Â 9% +44.6% 795580 Â15% sched_debug.cpu#1.avg_idle
127 Â 8% -69.9% 38 Â30% sched_debug.cpu#20.cpu_load[2]
3 Â12% -45.5% 2 Â40% sched_debug.cpu#18.cpu_load[4]
4664 Â10% +60.8% 7501 Â20% sched_debug.cfs_rq[21]:/.min_vruntime
118108 Â 2% -9.1% 107394 Â 5% sched_debug.cpu#9.nr_load_updates
4084 Â15% +48.9% 6082 Â16% sched_debug.cfs_rq[28]:/.min_vruntime
1506 Â10% +28.5% 1934 Â14% sched_debug.cpu#18.sched_count
282 Â 8% -59.0% 116 Â43% sched_debug.cfs_rq[26]:/.blocked_load_avg
285 Â 9% -58.7% 117 Â40% sched_debug.cfs_rq[26]:/.tg_load_contrib
5004 Â25% -35.7% 3217 Â16% sched_debug.cfs_rq[31]:/.min_vruntime
1131632 Â25% -45.5% 616403 Â42% sched_debug.cpu#1.sched_count
459 Â12% -30.7% 318 Â19% sched_debug.cfs_rq[18]:/.avg->runnable_avg_sum
128 Â 3% -43.8% 72 Â28% sched_debug.cpu#3.load
1466 Â 3% -41.8% 854 Â46% sched_debug.cpu#8.curr->pid
996199 Â 0% -11.1% 885680 Â 8% sched_debug.cpu#6.avg_idle
743 Â16% -32.9% 498 Â10% cpuidle.C3-SNB.usage
1.416e+08 Â11% +34.5% 1.905e+08 Â17% cpuidle.C1-SNB.time
74056 Â16% -45.2% 40609 Â38% sched_debug.cpu#0.nr_load_updates
33054 Â 3% +41.9% 46916 Â 3% softirqs.RCU
70652 Â17% -49.4% 35731 Â41% sched_debug.cfs_rq[0]:/.exec_clock
136723 Â15% -39.7% 82462 Â34% sched_debug.cfs_rq[0]:/.min_vruntime
654 Â10% +27.7% 836 Â17% sched_debug.cpu#18.sched_goidle
125106 Â 4% -11.0% 111302 Â 4% sched_debug.cpu#8.nr_load_updates
122 Â 6% -44.1% 68 Â27% sched_debug.cfs_rq[3]:/.load
9 Â12% -34.5% 6 Â19% sched_debug.cfs_rq[18]:/.tg_runnable_contrib
660329 Â11% +16.4% 768895 Â 4% sched_debug.cpu#11.avg_idle
405 Â 7% +21.1% 490 Â 6% slabinfo.kmem_cache.active_objs
405 Â 7% +21.1% 490 Â 6% slabinfo.kmem_cache.num_objs
1513 Â 5% -49.0% 771 Â25% sched_debug.cpu#3.curr->pid
69624 Â14% +26.2% 87837 Â 3% sched_debug.cpu#2.nr_load_updates
308728 Â 1% -16.1% 258979 Â 0% cpuidle.C7-SNB.usage
988675 Â 0% -9.8% 891351 Â 4% sched_debug.cpu#4.avg_idle
514 Â 5% +16.6% 599 Â 5% slabinfo.kmem_cache_node.active_objs
533 Â 5% +16.0% 618 Â 4% slabinfo.kmem_cache_node.num_objs
1492 Â10% +28.8% 1921 Â14% sched_debug.cpu#18.nr_switches
11153 Â 9% +17.4% 13091 Â 4% slabinfo.kmalloc-256.active_objs
3919 Â 5% -11.6% 3463 Â 7% sched_debug.cfs_rq[24]:/.tg_load_avg
3902 Â 4% -11.3% 3460 Â 7% sched_debug.cfs_rq[15]:/.tg_load_avg
3918 Â 6% -11.7% 3462 Â 6% sched_debug.cfs_rq[19]:/.tg_load_avg
3915 Â 5% -11.8% 3451 Â 8% sched_debug.cfs_rq[25]:/.tg_load_avg
3920 Â 6% -11.8% 3459 Â 6% sched_debug.cfs_rq[20]:/.tg_load_avg
3929 Â 5% -11.4% 3479 Â 6% sched_debug.cfs_rq[21]:/.tg_load_avg
3919 Â 5% -11.1% 3483 Â 7% sched_debug.cfs_rq[18]:/.tg_load_avg
3887 Â 5% -11.4% 3442 Â 7% sched_debug.cfs_rq[9]:/.tg_load_avg
3907 Â 5% -10.6% 3494 Â 7% sched_debug.cfs_rq[23]:/.tg_load_avg
3897 Â 4% -11.1% 3464 Â 7% sched_debug.cfs_rq[12]:/.tg_load_avg
3890 Â 6% -10.8% 3469 Â 8% sched_debug.cfs_rq[30]:/.tg_load_avg
3881 Â 6% -11.2% 3447 Â 7% sched_debug.cfs_rq[31]:/.tg_load_avg
3892 Â 5% -11.4% 3449 Â 7% sched_debug.cfs_rq[8]:/.tg_load_avg
3920 Â 5% -11.1% 3486 Â 6% sched_debug.cfs_rq[22]:/.tg_load_avg
3940 Â 4% -11.5% 3485 Â 6% sched_debug.cfs_rq[1]:/.tg_load_avg
3891 Â 5% -11.3% 3449 Â 7% sched_debug.cfs_rq[26]:/.tg_load_avg
3904 Â 4% -11.2% 3469 Â 7% sched_debug.cfs_rq[11]:/.tg_load_avg
3906 Â 5% -10.7% 3489 Â 6% sched_debug.cfs_rq[16]:/.tg_load_avg
3908 Â 4% -11.7% 3452 Â 7% sched_debug.cfs_rq[10]:/.tg_load_avg
3875 Â 5% -9.9% 3492 Â 8% sched_debug.cfs_rq[29]:/.tg_load_avg
499 Â 4% -11.7% 440 Â 6% numa-vmstat.node0.nr_page_table_pages
3878 Â 4% -10.5% 3472 Â 7% sched_debug.cfs_rq[13]:/.tg_load_avg
3899 Â 6% -10.8% 3479 Â 7% sched_debug.cfs_rq[27]:/.tg_load_avg
3873 Â 5% -10.5% 3466 Â 7% sched_debug.cfs_rq[14]:/.tg_load_avg
3909 Â 4% -10.5% 3498 Â 6% sched_debug.cfs_rq[5]:/.tg_load_avg
1992 Â 4% -11.6% 1761 Â 6% numa-meminfo.node0.PageTables
11313 Â 9% +17.2% 13260 Â 2% slabinfo.kmalloc-256.num_objs
3904 Â 5% -11.6% 3452 Â 6% sched_debug.cfs_rq[3]:/.tg_load_avg
3903 Â 4% -10.5% 3493 Â 6% sched_debug.cfs_rq[6]:/.tg_load_avg
3913 Â 5% -11.6% 3461 Â 6% sched_debug.cfs_rq[2]:/.tg_load_avg
3896 Â 5% -11.1% 3461 Â 7% sched_debug.cfs_rq[7]:/.tg_load_avg
3912 Â 4% -10.9% 3486 Â 6% sched_debug.cfs_rq[4]:/.tg_load_avg
3915 Â 5% -10.9% 3489 Â 6% sched_debug.cfs_rq[17]:/.tg_load_avg
417 Â 5% +1714.5% 7572 Â11% time.involuntary_context_switches
1545546 Â14% +46.1% 2257840 Â 8% time.voluntary_context_switches
92 Â 0% -4.0% 88 Â 1% time.percent_of_cpu_this_job_got
266.95 Â 1% -4.7% 254.49 Â 1% time.system_time
70.07 Â 0% -1.7% 68.85 Â 1% turbostat.Pkg_W
0.00 Â 0% -1.7% 0.00 Â 1% energy.energy-pkg

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-TCP_SENDFILE

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
357 Â14% +48.2% 529 Â 5% sched_debug.cfs_rq[2]:/.blocked_load_avg
273340 Â 1% +12.5% 307586 Â 8% sched_debug.cfs_rq[1]:/.MIN_vruntime
273341 Â 1% +12.5% 307587 Â 8% sched_debug.cfs_rq[1]:/.max_vruntime
834 Â 7% +22.7% 1024 Â 6% sched_debug.cfs_rq[2]:/.tg_load_contrib
422 Â 6% +7.0% 451 Â 5% sched_debug.cpu#1.cpu_load[0]
17997 Â13% -37.6% 11222 Â19% sched_debug.cpu#0.sched_goidle
58206 Â16% -35.2% 37726 Â 5% meminfo.DirectMap4k
73342 Â 3% -7.3% 67960 Â 4% softirqs.RCU
50 Â40% -60.3% 20 Â41% cpuidle.C2-ATM.usage
19996 Â 2% -12.4% 17512 Â 1% softirqs.SCHED
234 Â 9% -11.5% 207 Â 9% cpuidle.C6-ATM.usage
143 Â 0% -1.2% 141 Â 0% time.percent_of_cpu_this_job_got
415.97 Â 0% -1.1% 411.58 Â 0% time.system_time

testbox/testcase/testparams: lkp-sb03/nepim/300s-25%-udp

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
613815 ± 1% +7.4% 659052 ± 0% nepim.udp.avg.kbps_out
18732 ± 1% +7.4% 20112 ± 0% nepim.udp.avg.snd_s
-320497 ± 0% -7.2% -297493 ±-2% sched_debug.cfs_rq[24]:/.spread0
-321084 ± 0% -7.2% -297847 ±-2% sched_debug.cfs_rq[21]:/.spread0
-320898 ± 0% -7.1% -298153 ±-2% sched_debug.cfs_rq[30]:/.spread0
-322176 ± 0% -8.2% -295627 ±-2% sched_debug.cfs_rq[14]:/.spread0
-317520 ±-1% -10.0% -285795 ±-6% sched_debug.cfs_rq[12]:/.spread0
-314487 ±-1% -6.1% -295166 ±-1% sched_debug.cfs_rq[11]:/.spread0
-321530 ± 0% -7.6% -296947 ±-2% sched_debug.cfs_rq[15]:/.spread0
-321302 ± 0% -7.2% -298254 ±-2% sched_debug.cfs_rq[17]:/.spread0
-321475 ± 0% -7.5% -297494 ±-1% sched_debug.cfs_rq[23]:/.spread0
-317814 ± 0% -7.3% -294474 ±-2% sched_debug.cfs_rq[18]:/.spread0
-321687 ± 0% -7.7% -296916 ±-2% sched_debug.cfs_rq[29]:/.spread0
-320368 ± 0% -7.3% -296887 ±-2% sched_debug.cfs_rq[31]:/.spread0
-322274 ± 0% -7.5% -298025 ±-1% sched_debug.cfs_rq[22]:/.spread0
-318704 ± 0% -7.9% -293419 ±-2% sched_debug.cfs_rq[1]:/.spread0
-317100 ± 0% -8.0% -291706 ±-1% sched_debug.cfs_rq[5]:/.spread0
-310371 ±-1% -7.8% -286235 ±-1% sched_debug.cfs_rq[3]:/.spread0
-320602 ± 0% -7.6% -296275 ±-1% sched_debug.cfs_rq[19]:/.spread0
-314944 ±-1% -8.4% -288396 ±-2% sched_debug.cfs_rq[2]:/.spread0
-320652 ± 0% -7.6% -296167 ±-2% sched_debug.cfs_rq[20]:/.spread0
-317734 ± 0% -8.1% -292103 ±-2% sched_debug.cfs_rq[6]:/.spread0
-320529 ± 0% -8.3% -293841 ±-2% sched_debug.cfs_rq[26]:/.spread0
-314699 ± 0% -7.1% -292485 ±-2% sched_debug.cfs_rq[16]:/.spread0
2163 ±12% -79.7% 438 ±29% sched_debug.cpu#14.ttwu_count
-321151 ± 0% -8.0% -295420 ±-1% sched_debug.cfs_rq[27]:/.spread0
-319401 ±-1% -7.5% -295562 ±-1% sched_debug.cfs_rq[13]:/.spread0
1421 ±16% -76.9% 329 ±20% sched_debug.cpu#5.ttwu_local
-315126 ± 0% -7.8% -290630 ±-1% sched_debug.cfs_rq[4]:/.spread0
-313540 ± 0% -7.0% -291520 ±-2% sched_debug.cfs_rq[7]:/.spread0
-320276 ± 0% -7.6% -295808 ±-2% sched_debug.cfs_rq[28]:/.spread0
1719 ± 3% -83.4% 284 ±22% sched_debug.cpu#7.ttwu_local
1383 ±31% -87.7% 170 ±35% sched_debug.cpu#13.ttwu_local
3 ±28% -60.0% 1 ±35% sched_debug.cpu#16.cpu_load[3]
947 ±13% -89.0% 104 ±19% sched_debug.cpu#14.ttwu_local
7606 ±14% +498.2% 45499 ± 1% sched_debug.cpu#25.sched_count
2230 ±11% -84.7% 340 ±12% sched_debug.cpu#4.ttwu_local
616 ± 5% +618.8% 4432 ±37% sched_debug.cfs_rq[2]:/.exec_clock
-321906 ± 0% -8.0% -296005 ±-2% sched_debug.cfs_rq[9]:/.spread0
1487 ± 7% -79.2% 310 ± 4% sched_debug.cpu#6.ttwu_local
3778 ±21% -89.7% 387 ±15% sched_debug.cpu#10.ttwu_local
2539 ±13% -79.4% 522 ±20% sched_debug.cpu#13.ttwu_count
3463 ±14% -58.5% 1436 ±37% sched_debug.cpu#12.ttwu_count
3554 ±48% -80.3% 699 ±38% sched_debug.cpu#15.ttwu_count
2673 ± 6% +88.2% 5031 ± 4% sched_debug.cpu#16.ttwu_count
-320748 ± 0% -8.5% -293620 ±-2% sched_debug.cfs_rq[8]:/.spread0
6636 ± 3% +585.6% 45494 ± 1% sched_debug.cpu#25.nr_switches
6495 ±13% -75.6% 1584 ± 9% sched_debug.cpu#10.ttwu_count
3089 ±12% -73.1% 829 ±10% sched_debug.cpu#5.ttwu_count
2309 ±21% -86.4% 313 ± 5% sched_debug.cpu#1.ttwu_local
3199 ± 4% -77.4% 723 ±22% sched_debug.cpu#7.ttwu_count
4826 ± 6% +42.5% 6875 ± 3% sched_debug.cpu#16.nr_switches
6572 ± 3% +199.1% 19657 ±19% sched_debug.cpu#26.nr_load_updates
4409 ±13% -75.4% 1086 ± 5% sched_debug.cpu#4.ttwu_count
9187 ±33% -59.8% 3693 ±38% sched_debug.cpu#15.nr_switches
4413 ± 4% -9.5% 3994 ± 2% sched_debug.cpu#28.sched_count
941 ±19% -32.8% 632 ±20% sched_debug.cfs_rq[11]:/.max_vruntime
941 ±19% -32.8% 632 ±20% sched_debug.cfs_rq[11]:/.MIN_vruntime
6219 ±19% -75.8% 1502 ±15% sched_debug.cpu#9.sched_count
776 ±15% +107.5% 1610 ± 5% sched_debug.cpu#16.ttwu_local
4564 ±33% -60.3% 1813 ±39% sched_debug.cpu#15.sched_goidle
1128 ± 4% +15.3% 1300 ± 8% sched_debug.cfs_rq[3]:/.exec_clock
1795 ±38% +132.6% 4177 ±49% sched_debug.cpu#26.ttwu_count
5520 ±10% +454.6% 30617 ± 2% sched_debug.cpu#25.ttwu_count
11528 ±37% -52.6% 5467 ±34% sched_debug.cpu#13.sched_count
433 ±37% +385.1% 2100 ±33% sched_debug.cpu#24.nr_switches
445 ±36% +374.1% 2113 ±33% sched_debug.cpu#24.sched_count
3728 ± 6% -48.6% 1917 ±33% sched_debug.cpu#6.ttwu_count
186 ±22% +87.8% 350 ±16% sched_debug.cpu#22.ttwu_count
5998 ± 2% +197.5% 17846 ±17% sched_debug.cpu#30.nr_load_updates
9644 ± 9% -23.3% 7395 ± 4% sched_debug.cfs_rq[16]:/.min_vruntime
2692 ± 5% -58.6% 1115 ± 5% sched_debug.cpu#11.ttwu_local
4838 ± 6% +42.4% 6888 ± 3% sched_debug.cpu#16.sched_count
6623 ± 7% -59.4% 2689 ±45% sched_debug.cpu#14.nr_switches
0.13 ± 3% +268.4% 0.47 ± 2% turbostat.%pc2
2065 ± 6% +18.2% 2441 ± 4% sched_debug.cpu#16.sched_goidle
1167 ±17% +1248.5% 15736 ± 2% sched_debug.cpu#25.ttwu_local
9982 ±38% -51.2% 4866 ±21% sched_debug.cpu#15.sched_count
4218 ±21% -78.8% 892 ± 3% sched_debug.cpu#1.ttwu_count
12688 ±17% -63.2% 4669 ± 8% sched_debug.cpu#10.nr_switches
0 ± 0% +Inf% 1 ± 0% sched_debug.cfs_rq[11]:/.runnable_load_avg
7783 ± 3% -41.9% 4521 ± 6% sched_debug.cpu#6.nr_switches
191 ±33% +437.2% 1029 ±33% sched_debug.cpu#24.sched_goidle
174 ±20% +37.1% 239 ±16% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
4400 ± 4% -9.5% 3983 ± 2% sched_debug.cpu#28.nr_switches
3289 ± 7% -60.1% 1312 ±47% sched_debug.cpu#14.sched_goidle
113 ±28% +56.6% 177 ±14% sched_debug.cpu#19.ttwu_local
4013 ± 6% -42.4% 2311 ± 7% sched_debug.cpu#5.sched_goidle
5240 ± 3% -30.9% 3623 ±18% sched_debug.cpu#12.sched_goidle
746 ±23% +33.2% 994 ±20% sched_debug.cpu#22.sched_count
10760 ± 3% -18.3% 8796 ± 4% sched_debug.cpu#3.sched_goidle
7 ±25% +50.0% 11 ± 0% sched_debug.cfs_rq[26]:/.tg_runnable_contrib
353 ± 7% +45.1% 513 ± 3% sched_debug.cpu#21.ttwu_count
369 ±22% +48.7% 548 ± 2% sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
3879 ± 3% -43.2% 2204 ± 6% sched_debug.cpu#6.sched_goidle
8154 ± 9% -50.3% 4056 ± 7% sched_debug.cpu#13.nr_switches
6518 ± 8% -37.8% 4055 ±10% sched_debug.cpu#11.ttwu_count
12545 ±19% +94.3% 24380 ±17% sched_debug.cpu#0.sched_count
12125 ± 7% -35.9% 7767 ±22% sched_debug.cpu#12.sched_count
874 ±16% +27.0% 1110 ± 2% sched_debug.cpu#19.sched_count
15301 ±23% -55.3% 6839 ±44% sched_debug.cpu#10.sched_count
8071 ± 6% -41.6% 4710 ± 6% sched_debug.cpu#5.nr_switches
6935 ± 1% +164.2% 18321 ±15% sched_debug.cpu#28.nr_load_updates
160.08 ±21% -57.4% 68.24 ±17% sched_debug.cfs_rq[17]:/.exec_clock
4035 ±10% -51.2% 1968 ± 8% sched_debug.cpu#13.sched_goidle
6166 ± 4% -36.1% 3938 ± 1% sched_debug.cpu#4.sched_goidle
12390 ± 4% -35.6% 7979 ± 1% sched_debug.cpu#4.nr_switches
2013 ±13% -40.4% 1199 ±22% sched_debug.cfs_rq[31]:/.exec_clock
21917 ± 3% -23.4% 16789 ± 3% sched_debug.cpu#11.nr_switches
2889 ±11% +38.8% 4009 ± 4% sched_debug.cpu#0.ttwu_local
1770 ± 4% -9.9% 1595 ± 3% sched_debug.cpu#28.sched_goidle
9373 ± 1% +122.2% 20825 ±13% sched_debug.cpu#16.nr_load_updates
21640 ± 3% -17.4% 17875 ± 4% sched_debug.cpu#3.nr_switches
2436 ±27% +59.0% 3874 ± 8% sched_debug.cfs_rq[9]:/.min_vruntime
141 ±42% +78.5% 251 ±31% sched_debug.cfs_rq[21]:/.avg->runnable_avg_sum
5 ± 8% -25.0% 4 ± 0% sched_debug.cfs_rq[10]:/.nr_spread_over
6297 ± 1% +180.9% 17692 ±16% sched_debug.cpu#22.nr_load_updates
23813 ± 5% -26.3% 17540 ± 9% sched_debug.cpu#11.sched_count
7203 ±10% +51.3% 10901 ± 6% sched_debug.cpu#0.nr_switches
6103 ± 0% +184.8% 17380 ±16% sched_debug.cpu#29.nr_load_updates
6345 ± 1% +179.0% 17704 ±16% sched_debug.cpu#21.nr_load_updates
6778 ±13% -38.3% 4182 ± 1% sched_debug.cpu#3.ttwu_count
7185 ± 2% +158.0% 18540 ±14% sched_debug.cpu#20.nr_load_updates
10557 ± 3% -30.0% 7389 ±17% sched_debug.cpu#12.nr_switches
10861 ± 3% -24.2% 8234 ± 3% sched_debug.cpu#11.sched_goidle
6415 ± 1% +185.0% 18284 ±17% sched_debug.cpu#17.nr_load_updates
6453 ± 2% +183.8% 18311 ±10% sched_debug.cpu#31.nr_load_updates
6222 ± 2% +193.1% 18237 ±17% sched_debug.cpu#24.nr_load_updates
92 ± 6% +170.3% 248 ±45% sched_debug.cpu#27.ttwu_local
7028 ± 3% +162.9% 18480 ±15% sched_debug.cpu#18.nr_load_updates
7966 ± 5% -51.4% 3869 ±13% sched_debug.cpu#7.nr_switches
5522 ±12% -30.6% 3833 ±10% sched_debug.cpu#2.sched_goidle
3962 ± 5% -52.1% 1899 ±13% sched_debug.cpu#7.sched_goidle
4947 ±18% -53.5% 2300 ±12% sched_debug.cpu#1.sched_goidle
6343 ± 0% +182.5% 17920 ±15% sched_debug.cpu#19.nr_load_updates
2816 ±10% -50.2% 1402 ± 3% sched_debug.cpu#3.ttwu_local
6187 ± 1% +198.2% 18452 ±16% sched_debug.cpu#27.nr_load_updates
6690 ± 8% +45.0% 9701 ± 4% sched_debug.cpu#0.ttwu_count
6420 ± 1% +183.9% 18227 ±18% sched_debug.cpu#23.nr_load_updates
341 ±24% +34.7% 460 ±20% sched_debug.cpu#22.sched_goidle
10009 ±17% -53.0% 4701 ±12% sched_debug.cpu#1.nr_switches
862 ±16% +27.3% 1097 ± 2% sched_debug.cpu#19.nr_switches
3816 ±21% +58.3% 6040 ±18% sched_debug.cfs_rq[26]:/.min_vruntime
3259 ±22% -37.6% 2033 ±10% sched_debug.cfs_rq[21]:/.min_vruntime
7691 ±16% +137.7% 18283 ±15% sched_debug.cpu#9.nr_load_updates
2017 ±14% +53.7% 3101 ± 2% sched_debug.cpu#0.sched_goidle
9970 ± 0% +94.2% 19358 ±13% sched_debug.cpu#7.nr_load_updates
3040 ±15% -46.5% 1626 ±12% sched_debug.cfs_rq[17]:/.min_vruntime
404 ±13% +118.1% 881 ±21% sched_debug.cpu#19.ttwu_count
735 ±23% +33.6% 982 ±20% sched_debug.cpu#22.nr_switches
9707 ± 1% +102.0% 19613 ±14% sched_debug.cpu#6.nr_load_updates
7028 ± 7% +138.5% 16759 ±15% sched_debug.cpu#1.nr_load_updates
11006 ± 2% +83.2% 20160 ±13% sched_debug.cpu#4.nr_load_updates
9640 ± 3% +101.9% 19466 ±14% sched_debug.cpu#5.nr_load_updates
10343 ±14% +86.3% 19265 ±18% sched_debug.cpu#15.nr_load_updates
8734 ± 1% +115.3% 18805 ±14% sched_debug.cpu#14.nr_load_updates
9591 ± 4% +100.4% 19225 ±12% sched_debug.cpu#13.nr_load_updates
154763 ± 0% -8.9% 141064 ± 2% sched_debug.cfs_rq[0]:/.exec_clock
12025 ± 6% +73.6% 20879 ±12% sched_debug.cpu#11.nr_load_updates
10341 ± 4% +102.2% 20906 ±14% sched_debug.cpu#12.nr_load_updates
934 ± 1% -32.0% 635 ±10% cpuidle.C1E-SNB.usage
10712 ±10% +104.0% 21852 ±20% sched_debug.cpu#8.nr_load_updates
11875 ± 3% +80.8% 21468 ±13% sched_debug.cpu#3.nr_load_updates
465 ± 7% -35.5% 300 ±33% sched_debug.cfs_rq[7]:/.exec_clock
645 ±14% +40.0% 903 ±15% slabinfo.blkdev_requests.active_objs
645 ±14% +40.0% 903 ±15% slabinfo.blkdev_requests.num_objs
956772 ± 5% -10.1% 859962 ± 5% sched_debug.cpu#25.avg_idle
10958 ± 3% +79.7% 19689 ± 4% sched_debug.cpu#2.nr_load_updates
122784 ± 0% -27.0% 89589 ± 3% cpuidle.C7-SNB.usage
672 ±13% +35.4% 910 ±13% slabinfo.xfs_buf.num_objs
672 ±13% +35.4% 910 ±13% slabinfo.xfs_buf.active_objs
4671 ±10% -18.2% 3821 ± 6% cpuidle.C1-SNB.usage
37764 ± 2% +60.0% 60411 ± 3% softirqs.SCHED
11061 ± 5% +11.5% 12334 ± 4% slabinfo.kmalloc-256.active_objs
5821781 ± 8% -18.0% 4773888 ±10% meminfo.DirectMap2M
384047 ± 0% +12.2% 430996 ± 1% softirqs.TIMER
1445 ± 1% -12.4% 1266 ± 5% slabinfo.sock_inode_cache.active_objs
1445 ± 1% -12.4% 1266 ± 5% slabinfo.sock_inode_cache.num_objs
364 ±20% +9569.9% 35198 ± 7% time.involuntary_context_switches
2331 ± 0% -3.5% 2250 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-TCP_RR

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
549 ± 6% -11.5% 486 ± 2% sched_debug.cfs_rq[0]:/.load
244 ±22% +68.3% 411 ±22% sched_debug.cfs_rq[3]:/.blocked_load_avg
705 ± 7% +20.1% 846 ±10% sched_debug.cfs_rq[3]:/.tg_load_contrib
836 ± 8% +7.8% 901 ± 7% sched_debug.cfs_rq[2]:/.tg_load_contrib
3057027 ± 0% +9.5% 3346840 ± 1% sched_debug.cpu#3.nr_switches
909 ± 9% -15.6% 767 ± 1% sched_debug.cfs_rq[0]:/.tg_load_contrib
3113873 ± 3% +8.1% 3365443 ± 0% sched_debug.cpu#3.sched_count
3055925 ± 0% +10.9% 3389777 ± 2% sched_debug.cpu#2.nr_switches
21 ±15% -25.4% 15 ±15% sched_debug.cfs_rq[3]:/.nr_spread_over
554 ± 1% -12.8% 483 ± 3% sched_debug.cpu#1.load
14437 ±45% +83.7% 26517 ±12% cpuidle.C2-ATM.time
27 ±21% +61.7% 43 ±20% cpuidle.C2-ATM.usage
232546 ± 9% -14.0% 200000 ±12% sched_debug.cfs_rq[3]:/.MIN_vruntime
232546 ± 9% -14.0% 200000 ±12% sched_debug.cfs_rq[3]:/.max_vruntime
13 ± 9% -29.3% 9 ± 4% sched_debug.cfs_rq[2]:/.nr_spread_over
0.93 ± 6% -13.6% 0.81 ± 4% perf-profile.cpu-cycles.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
17732 ± 1% -16.7% 14766 ± 4% softirqs.SCHED
128 ± 3% +5.2% 135 ± 4% uptime.idle
8424 ± 6% -10.2% 7561 ± 3% slabinfo.kmalloc-64.num_objs
1.49 ± 4% -8.3% 1.37 ± 2% perf-profile.cpu-cycles.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.sys_recvfrom

testbox/testcase/testparams: lkp-nex04/ebizzy/200%-100x-10s

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
83 ± 0% -2.0% 81 ± 0% ebizzy.throughput.per_thread.min
12681 ± 0% -1.4% 12503 ± 0% ebizzy.throughput
41.83 ± 0% +1.2% 42.33 ± 0% ebizzy.time.user
115275 ±39% -255.1% -178796 ±-14% sched_debug.cfs_rq[1]:/.spread0
-20 ±-20% +109.7% -43 ±-16% sched_debug.cpu#31.nr_uninterruptible
-12 ±-45% +171.1% -34 ±-36% sched_debug.cpu#15.nr_uninterruptible
413 ±41% +59.7% 660 ±13% sched_debug.cfs_rq[19]:/.blocked_load_avg
207474 ±34% +204.1% 630866 ±32% sched_debug.cpu#40.sched_count
56 ±18% +112.5% 119 ±13% sched_debug.cpu#49.nr_uninterruptible
17 ±15% +38.5% 24 ±12% sched_debug.cpu#14.cpu_load[1]
149203 ±21% +181.6% 420151 ±24% sched_debug.cpu#54.sched_count
37 ±23% -45.9% 20 ±44% sched_debug.cpu#4.load
252107 ±40% +120.5% 556018 ±34% sched_debug.cpu#23.sched_count
210263 ±34% +184.3% 597868 ±44% sched_debug.cpu#21.sched_count
990810 ±12% +18.7% 1175867 ±10% sched_debug.cfs_rq[39]:/.spread0
401 ±38% +50.2% 602 ± 8% sched_debug.cfs_rq[25]:/.blocked_load_avg
18 ±11% +29.1% 23 ± 7% sched_debug.cpu#14.cpu_load[2]
17 ± 7% +20.8% 21 ± 5% sched_debug.cpu#14.cpu_load[3]
415 ±37% +47.6% 613 ± 8% sched_debug.cfs_rq[25]:/.tg_load_contrib
25 ± 4% -28.6% 18 ±26% numa-numastat.node0.other_node
426 ±40% +57.4% 671 ±13% sched_debug.cfs_rq[19]:/.tg_load_contrib
457 ±44% +50.3% 687 ± 4% sched_debug.cfs_rq[10]:/.blocked_load_avg
12 ±26% +127.0% 28 ±35% sched_debug.cfs_rq[58]:/.load
12 ±26% +154.1% 31 ±44% sched_debug.cpu#58.load
471 ±43% +48.6% 700 ± 4% sched_debug.cfs_rq[10]:/.tg_load_contrib
198896 ±40% +164.2% 525570 ±49% sched_debug.cpu#24.sched_count
23 ±12% -34.8% 15 ±23% sched_debug.cfs_rq[26]:/.load
2373 ± 0% -14.8% 2021 ± 2% sched_debug.cpu#21.sched_goidle
367 ±45% +56.2% 574 ±16% sched_debug.cfs_rq[2]:/.blocked_load_avg
17 ±10% +17.3% 20 ±12% sched_debug.cpu#27.cpu_load[2]
2371 ± 2% -12.9% 2064 ± 1% sched_debug.cpu#16.sched_goidle
17 ± 9% +17.0% 20 ± 9% sched_debug.cpu#27.cpu_load[3]
14 ± 5% -19.0% 11 ±11% sched_debug.cfs_rq[11]:/.runnable_load_avg
18 ±16% -46.3% 9 ± 9% sched_debug.cpu#26.cpu_load[0]
402 ±44% +63.3% 657 ±11% sched_debug.cfs_rq[7]:/.tg_load_contrib
262964 ±46% +78.4% 469186 ±44% sched_debug.cpu#47.sched_count
39 ± 8% -31.4% 27 ±29% sched_debug.cfs_rq[2]:/.load
12 ±26% +102.7% 25 ±37% sched_debug.cpu#58.cpu_load[0]
12 ±26% +81.1% 22 ±29% sched_debug.cfs_rq[58]:/.runnable_load_avg
440 ±45% +58.7% 698 ±16% sched_debug.cfs_rq[20]:/.blocked_load_avg
2555 ± 2% -18.0% 2096 ± 4% sched_debug.cpu#14.sched_goidle
6 ±25% -50.0% 3 ±28% cpuidle.POLL.usage
16 ±10% +28.0% 21 ± 2% sched_debug.cpu#1.cpu_load[3]
186638 ±30% +122.1% 414470 ±43% sched_debug.cpu#53.sched_count
391 ±44% +51.4% 592 ±15% sched_debug.cfs_rq[2]:/.tg_load_contrib
27 ±47% +96.3% 53 ±38% sched_debug.cpu#41.cpu_load[0]
14 ±31% +84.1% 27 ±20% sched_debug.cpu#60.load
21 ±30% -52.3% 10 ±12% sched_debug.cpu#21.cpu_load[0]
390 ±46% +65.6% 645 ±11% sched_debug.cfs_rq[7]:/.blocked_load_avg
66283 ± 3% +8.5% 71886 ± 2% sched_debug.cpu#11.ttwu_count
27 ±47% +85.4% 50 ±35% sched_debug.cfs_rq[41]:/.runnable_load_avg
389 ±43% +64.2% 640 ± 8% sched_debug.cfs_rq[29]:/.blocked_load_avg
14 ±22% -34.1% 9 ± 9% sched_debug.cfs_rq[26]:/.runnable_load_avg
402 ±41% +62.3% 653 ± 7% sched_debug.cfs_rq[29]:/.tg_load_contrib
382891 ±41% +64.4% 629385 ±23% sched_debug.cpu#10.sched_count
16 ± 5% +20.8% 19 ± 2% sched_debug.cpu#1.cpu_load[4]
3299 ± 9% -27.3% 2398 ± 3% sched_debug.cpu#4.sched_goidle
21 ±34% -45.3% 11 ±38% sched_debug.cpu#53.load
135693 ± 3% +7.3% 145636 ± 2% sched_debug.cpu#11.nr_switches
20 ± 7% -30.0% 14 ± 5% sched_debug.cpu#21.cpu_load[1]
17 ±16% +67.3% 29 ±23% sched_debug.cpu#28.load
15 ±26% +48.9% 23 ±22% sched_debug.cpu#9.load
454 ±44% +56.6% 711 ±15% sched_debug.cfs_rq[20]:/.tg_load_contrib
426 ±47% +56.8% 669 ±10% sched_debug.cfs_rq[21]:/.blocked_load_avg
12 ±10% -21.6% 9 ± 4% sched_debug.cfs_rq[19]:/.runnable_load_avg
19 ± 4% -17.5% 15 ± 6% sched_debug.cpu#21.cpu_load[2]
442 ±46% +54.6% 683 ±10% sched_debug.cfs_rq[21]:/.tg_load_contrib
16 ± 2% -8.0% 15 ± 3% sched_debug.cpu#21.cpu_load[4]
13 ±22% +89.7% 24 ±34% sched_debug.cpu#58.cpu_load[1]
17 ± 2% -9.4% 16 ± 5% sched_debug.cpu#21.cpu_load[3]
4281 ±32% -36.6% 2713 ± 5% sched_debug.cpu#11.sched_goidle
145077 ±20% +90.1% 275727 ±41% sched_debug.cpu#52.sched_count
28 ±44% +75.0% 49 ±32% sched_debug.cpu#41.cpu_load[1]
11 ± 8% +114.3% 25 ±25% sched_debug.cfs_rq[59]:/.load
13 ± 6% +170.7% 37 ±45% sched_debug.cpu#3.cpu_load[0]
3270 ±18% -24.3% 2474 ± 6% sched_debug.cpu#1.sched_goidle
12 ± 7% +71.1% 21 ± 4% sched_debug.cfs_rq[1]:/.runnable_load_avg
22 ±31% +52.2% 34 ±24% sched_debug.cpu#10.load
12 ± 7% +100.0% 25 ± 6% sched_debug.cfs_rq[1]:/.load
160297 ±30% +44.2% 231073 ±24% sched_debug.cpu#35.sched_count
2223 ± 0% -12.5% 1946 ± 2% sched_debug.cpu#17.sched_goidle
18 ±11% +80.4% 33 ±21% sched_debug.cfs_rq[18]:/.load
2379 ± 8% -12.3% 2085 ± 7% sched_debug.cpu#22.sched_goidle
42 ± 6% -35.2% 27 ±27% sched_debug.cfs_rq[20]:/.load
20 ±34% -43.5% 11 ±26% sched_debug.cfs_rq[54]:/.load
20 ±34% -43.5% 11 ±26% sched_debug.cpu#54.load
202074 ±26% +85.4% 374733 ±40% sched_debug.cpu#49.sched_count
24 ±23% -49.3% 12 ±21% sched_debug.cpu#6.load
13 ±18% +132.5% 31 ±18% sched_debug.cpu#43.load
203456 ±24% +157.7% 524260 ±41% sched_debug.cpu#20.sched_count
200838 ±21% +129.5% 460986 ±18% sched_debug.cpu#18.sched_count
400 ±45% +49.9% 600 ±11% sched_debug.cfs_rq[26]:/.blocked_load_avg
415 ±44% +46.7% 609 ±10% sched_debug.cfs_rq[26]:/.tg_load_contrib
368 ±46% +78.1% 656 ±14% sched_debug.cfs_rq[22]:/.blocked_load_avg
12 ± 7% -23.7% 9 ± 9% sched_debug.cfs_rq[16]:/.runnable_load_avg
380 ±44% +75.2% 666 ±13% sched_debug.cfs_rq[22]:/.tg_load_contrib
16 ±15% -44.0% 9 ± 5% sched_debug.cpu#45.cpu_load[0]
16 ±15% -44.0% 9 ±13% sched_debug.cfs_rq[45]:/.runnable_load_avg
28 ±17% +89.5% 54 ±41% sched_debug.cpu#45.nr_uninterruptible
15 ±30% -40.0% 9 ± 9% sched_debug.cfs_rq[37]:/.runnable_load_avg
208 ± 0% -28.2% 149 ±10% proc-vmstat.nr_dirtied
17 ±19% -42.3% 10 ± 8% sched_debug.cpu#45.cpu_load[1]
14 ±28% +61.4% 23 ±28% sched_debug.cfs_rq[48]:/.load
20 ±27% +62.9% 33 ±28% sched_debug.cpu#62.load
123063 ± 5% +82.2% 224204 ±45% sched_debug.cpu#60.sched_count
16 ±23% +96.0% 32 ±41% sched_debug.cpu#6.cpu_load[0]
15 ±16% -26.1% 11 ±11% sched_debug.cpu#38.cpu_load[1]
432 ±44% +60.5% 694 ± 8% sched_debug.cfs_rq[23]:/.blocked_load_avg
252685 ±38% +40.9% 356018 ±10% sched_debug.cpu#55.sched_count
445 ±42% +58.1% 704 ± 8% sched_debug.cfs_rq[23]:/.tg_load_contrib
13 ±13% +73.2% 23 ±10% sched_debug.cfs_rq[3]:/.load
18 ± 6% +65.5% 30 ±24% sched_debug.cpu#3.cpu_load[1]
13 ±15% +75.6% 24 ±10% sched_debug.cfs_rq[31]:/.load
17 ±17% +60.8% 27 ±10% sched_debug.cpu#31.load
2674 ±14% +31.3% 3512 ± 9% cpuidle.C1-NHM.usage
18 ±20% -32.7% 12 ±10% sched_debug.cpu#45.cpu_load[2]
19 ±10% +29.3% 25 ±13% sched_debug.cpu#3.cpu_load[2]
12 ±16% -21.6% 9 ± 9% sched_debug.cpu#61.cpu_load[0]
12 ±16% -21.6% 9 ± 9% sched_debug.cpu#61.cpu_load[1]
371 ±46% +55.2% 575 ±12% sched_debug.cfs_rq[39]:/.blocked_load_avg
385 ±44% +52.9% 589 ±13% sched_debug.cfs_rq[39]:/.tg_load_contrib
18 ±17% -25.0% 14 ±10% sched_debug.cpu#45.cpu_load[3]
14 ± 3% +23.3% 17 ±10% sched_debug.cpu#60.cpu_load[3]
13 ±10% +38.5% 18 ±25% sched_debug.cpu#31.cpu_load[1]
63 ±37% +68.9% 107 ±22% sched_debug.cfs_rq[49]:/.nr_spread_over
365 ±46% +63.5% 596 ±14% sched_debug.cfs_rq[62]:/.blocked_load_avg
27 ±33% -38.6% 17 ±14% sched_debug.cfs_rq[30]:/.load
378 ±44% +61.0% 609 ±14% sched_debug.cfs_rq[62]:/.tg_load_contrib
42 ±22% +56.3% 66 ±24% sched_debug.cfs_rq[57]:/.load
14 ±11% +15.9% 17 ± 8% sched_debug.cpu#47.cpu_load[2]
1881 ±10% -14.2% 1614 ± 6% sched_debug.cpu#39.sched_goidle
12305 ± 5% -9.9% 11081 ± 3% numa-meminfo.node0.SReclaimable
3075 ± 5% -9.9% 2770 ± 3% numa-vmstat.node0.nr_slab_reclaimable
24141 ± 2% +10.0% 26545 ± 3% numa-meminfo.node1.SUnreclaim
11 ±11% -17.6% 9 ± 5% sched_debug.cpu#44.cpu_load[0]
11 ±11% -17.6% 9 ± 5% sched_debug.cfs_rq[44]:/.runnable_load_avg
12214 ± 1% -8.9% 11131 ± 3% slabinfo.kmalloc-256.active_objs
15 ± 6% +28.3% 19 ±12% sched_debug.cpu#31.cpu_load[2]
16 ± 5% +18.0% 19 ± 6% sched_debug.cpu#31.cpu_load[3]
15 ± 3% +21.7% 18 ±13% sched_debug.cpu#58.cpu_load[4]
14 ± 3% +18.2% 17 ± 5% sched_debug.cpu#60.cpu_load[4]
639 ± 5% -8.9% 582 ± 1% numa-vmstat.node3.nr_mapped
2558 ± 5% -8.9% 2331 ± 1% numa-meminfo.node3.Mapped
16791 ±14% -19.0% 13605 ±10% sched_debug.cpu#49.curr->pid
39715 ± 3% -11.4% 35185 ± 3% numa-meminfo.node0.Slab
3039 ± 5% +15.2% 3500 ± 6% sched_debug.cpu#57.sched_goidle
16 ± 2% +10.2% 18 ± 4% sched_debug.cpu#31.cpu_load[4]
14 ± 3% +11.6% 16 ± 5% sched_debug.cpu#59.cpu_load[3]
2350 ±22% -22.7% 1817 ± 5% sched_debug.cpu#42.sched_goidle
407 ±45% +50.0% 610 ± 9% sched_debug.cfs_rq[56]:/.tg_load_contrib
2567 ± 4% -8.4% 2352 ± 0% numa-meminfo.node0.Mapped
641 ± 4% -8.4% 587 ± 0% numa-vmstat.node0.nr_mapped
393 ±47% +51.7% 596 ± 9% sched_debug.cfs_rq[56]:/.blocked_load_avg
2549 ± 4% -8.1% 2342 ± 0% proc-vmstat.nr_mapped
10198 ± 4% -8.1% 9369 ± 0% meminfo.Mapped
636 ± 5% -7.8% 587 ± 0% numa-vmstat.node2.nr_mapped
2547 ± 5% -7.8% 2349 ± 0% numa-meminfo.node2.Mapped
631 ± 4% -7.5% 583 ± 0% numa-vmstat.node1.nr_mapped
2525 ± 4% -7.5% 2336 ± 0% numa-meminfo.node1.Mapped
1.219e+09 ± 0% -1.4% 1.202e+09 ± 0% time.minor_page_faults
793360 ± 0% -1.6% 780981 ± 0% vmstat.system.in
4198 ± 0% +1.2% 4247 ± 0% time.user_time

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-TCP_CRR

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
915757 ± 2% -12.1% 804961 ±11% sched_debug.cpu#1.ttwu_local
707963 ± 1% +17.3% 830122 ± 8% sched_debug.cpu#2.ttwu_local
952710 ± 2% -13.4% 825293 ±11% sched_debug.cpu#1.ttwu_count
740153 ± 0% +16.2% 860125 ± 7% sched_debug.cpu#2.ttwu_count
970 ± 6% -10.9% 864 ± 7% sched_debug.cfs_rq[2]:/.tg_load_contrib
411 ±10% +17.2% 482 ± 3% sched_debug.cpu#2.load
1048586 ± 3% +32.5% 1389065 ± 7% sched_debug.cpu#2.sched_count
1027759 ± 4% +26.3% 1298423 ± 7% sched_debug.cpu#3.sched_count
1028957 ± 1% +14.7% 1180449 ± 9% sched_debug.cpu#2.nr_switches
1321054 ± 2% -13.9% 1137448 ±11% sched_debug.cpu#1.nr_switches
80734 ±11% -44.8% 44553 ± 9% meminfo.DirectMap4k
70255 ±27% +66.1% 116714 ±14% sched_debug.cfs_rq[3]:/.MIN_vruntime
70255 ±27% +66.1% 116714 ±14% sched_debug.cfs_rq[3]:/.max_vruntime
367 ± 6% +16.7% 428 ± 4% sched_debug.cfs_rq[3]:/.load
13 ±16% -35.9% 8 ±11% sched_debug.cfs_rq[2]:/.nr_spread_over
18981 ± 2% -17.7% 15623 ± 3% softirqs.SCHED
3707 ± 4% -13.8% 3197 ± 6% slabinfo.anon_vma.active_objs
3707 ± 4% -13.8% 3197 ± 6% slabinfo.anon_vma.num_objs
6203 ± 0% -4.5% 5925 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-sb03/thrulay/300s

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
47828 ± 1% -5.5% 45180 ± 0% thrulay.throughput
0.10 ± 1% +5.9% 0.10 ± 0% thrulay.RTT
-59302 ±-17% -50.1% -29575 ±-35% sched_debug.cfs_rq[24]:/.spread0
-60291 ±-15% -50.7% -29694 ±-31% sched_debug.cfs_rq[21]:/.spread0
-60819 ±-14% -47.2% -32107 ±-24% sched_debug.cfs_rq[30]:/.spread0
-54873 ±-16% -45.3% -30027 ±-34% sched_debug.cfs_rq[17]:/.spread0
-59850 ±-17% -47.2% -31624 ±-30% sched_debug.cfs_rq[23]:/.spread0
-55261 ±-20% -45.9% -29906 ±-24% sched_debug.cfs_rq[18]:/.spread0
-41785 ±-16% -41.5% -24458 ±-43% sched_debug.cfs_rq[25]:/.spread0
-57365 ±-17% -47.7% -30005 ±-32% sched_debug.cfs_rq[29]:/.spread0
-60492 ±-14% -51.2% -29536 ±-30% sched_debug.cfs_rq[31]:/.spread0
-61193 ±-14% -47.0% -32461 ±-27% sched_debug.cfs_rq[22]:/.spread0
-54660 ±-11% -62.3% -20582 ±-29% sched_debug.cfs_rq[5]:/.spread0
-48402 ±-25% -64.3% -17293 ±-9% sched_debug.cfs_rq[3]:/.spread0
-60044 ±-15% -48.8% -30734 ±-28% sched_debug.cfs_rq[19]:/.spread0
-47533 ±-5% -69.0% -14740 ±-22% sched_debug.cfs_rq[2]:/.spread0
-55735 ±-19% -47.5% -29282 ±-35% sched_debug.cfs_rq[20]:/.spread0
-55307 ±-22% -54.9% -24958 ±-2% sched_debug.cfs_rq[6]:/.spread0
-56973 ±-18% -47.7% -29790 ±-28% sched_debug.cfs_rq[26]:/.spread0
-62112 ±-15% -54.6% -28195 ±-27% sched_debug.cfs_rq[16]:/.spread0
-59783 ±-15% -55.3% -26706 ±-44% sched_debug.cfs_rq[27]:/.spread0
-50405 ±-2% -62.5% -18877 ±-24% sched_debug.cfs_rq[4]:/.spread0
-56951 ±-14% -56.5% -24767 ±-7% sched_debug.cfs_rq[7]:/.spread0
-58540 ±-16% -46.1% -31550 ±-28% sched_debug.cfs_rq[28]:/.spread0
0 ± 0% +Inf% 9 ±41% sched_debug.cpu#16.cpu_load[4]
2027 ±14% -47.4% 1067 ±10% sched_debug.cpu#1.ttwu_local
278405 ±41% +46.1% 406808 ± 9% cpuidle.C6-SNB.time
51182.86 ±25% -50.9% 25142.66 ±24% sched_debug.cfs_rq[9]:/.exec_clock
4 ± 0% +758.3% 34 ±35% sched_debug.cfs_rq[27]:/.tg_runnable_contrib
215 ± 1% +643.7% 1604 ±33% sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
1889 ± 6% -44.9% 1041 ±34% sched_debug.cpu#2.ttwu_local
12 ±23% +219.4% 38 ±37% numa-numastat.node0.other_node
969 ±30% +156.7% 2489 ±43% sched_debug.cfs_rq[29]:/.exec_clock
1777 ±20% +306.5% 7227 ±45% sched_debug.cfs_rq[16]:/.min_vruntime
2059 ± 5% -20.0% 1647 ±15% sched_debug.cpu#11.ttwu_local
148 ±11% +35.1% 200 ±19% sched_debug.cpu#21.ttwu_local
1206 ± 6% +72.1% 2077 ±11% sched_debug.cpu#25.ttwu_local
601215 ±12% -54.4% 274002 ± 7% sched_debug.cpu#1.ttwu_count
55669 ±23% -43.1% 31676 ±19% sched_debug.cfs_rq[8]:/.exec_clock
58901 ±22% -34.9% 38328 ±27% sched_debug.cfs_rq[8]:/.min_vruntime
693 ±24% +122.2% 1541 ±49% sched_debug.cfs_rq[21]:/.exec_clock
388 ±37% +141.9% 938 ±18% sched_debug.cpu#10.curr->pid
22104 ±48% -50.4% 10965 ±22% sched_debug.cfs_rq[25]:/.min_vruntime
56292 ± 2% -8.1% 51711 ± 6% numa-meminfo.node1.Active
122 ± 9% +127.6% 277 ±22% sched_debug.cpu#19.ttwu_local
330 ± 8% -52.9% 155 ±40% sched_debug.cfs_rq[18]:/.tg_load_contrib
305 ± 6% -57.4% 130 ±48% sched_debug.cpu#1.cpu_load[3]
94 ±24% +61.0% 151 ±14% sched_debug.cpu#22.ttwu_local
312 ±12% -50.3% 155 ±40% sched_debug.cfs_rq[18]:/.blocked_load_avg
1092563 ±12% -41.6% 637743 ±23% sched_debug.cpu#0.sched_count
186 ± 5% +89.1% 351 ±30% sched_debug.cpu#26.ttwu_local
334 ± 7% -57.4% 142 ±43% sched_debug.cpu#1.cpu_load[4]
13942 ± 8% -28.0% 10036 ±24% numa-meminfo.node1.Active(anon)
8994 ±24% -56.4% 3922 ± 4% sched_debug.cpu#0.ttwu_local
3845 ±11% +21.9% 4688 ± 7% sched_debug.cfs_rq[19]:/.min_vruntime
53930 ±13% -59.1% 22080 ±16% sched_debug.cfs_rq[1]:/.exec_clock
56331 ±27% -45.3% 30809 ±30% sched_debug.cfs_rq[9]:/.min_vruntime
380 ± 8% -64.1% 136 ±45% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
1027661 ±11% -40.8% 608510 ±26% sched_debug.cpu#0.nr_switches
17471 ± 8% -64.1% 6277 ±45% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
3486 ± 8% -28.0% 2510 ±24% numa-vmstat.node1.nr_active_anon
72 ±27% +94.0% 139 ± 8% sched_debug.cpu#10.load
1958 ± 9% -25.1% 1466 ± 7% sched_debug.cpu#3.ttwu_local
4107 ±15% +112.2% 8717 ±34% sched_debug.cfs_rq[27]:/.min_vruntime
157 ±40% +77.0% 279 ±20% sched_debug.cpu#29.ttwu_local
542656 ± 5% +45.1% 787416 ±11% sched_debug.cpu#1.avg_idle
3598 ± 5% +59.2% 5728 ±19% sched_debug.cfs_rq[21]:/.min_vruntime
511646 ±11% -41.0% 301664 ±26% sched_debug.cpu#0.sched_goidle
3398 ±14% +73.2% 5886 ±13% sched_debug.cfs_rq[31]:/.min_vruntime
200 ± 4% +34.7% 270 ±27% sched_debug.cpu#18.ttwu_local
59922 ±13% -50.7% 29522 ±19% sched_debug.cfs_rq[1]:/.min_vruntime
993453 ± 0% -11.1% 882755 ±10% sched_debug.cpu#6.avg_idle
1.757e+08 ±21% +115.7% 3.79e+08 ±11% cpuidle.C1-SNB.time
60986 ± 9% -42.8% 34903 ±27% sched_debug.cpu#0.nr_load_updates
39546 ± 2% +15.5% 45657 ± 3% softirqs.RCU
55041 ±12% -46.7% 29348 ±34% sched_debug.cfs_rq[0]:/.exec_clock
63888 ±14% -44.6% 35422 ±25% sched_debug.cfs_rq[0]:/.min_vruntime
13748 ± 3% -28.2% 9867 ±28% numa-meminfo.node1.AnonPages
3195 ± 3% +55.3% 4960 ± 6% cpuidle.C1E-SNB.usage
341 ± 8% +37.5% 469 ± 6% slabinfo.kmem_cache.active_objs
341 ± 8% +37.5% 469 ± 6% slabinfo.kmem_cache.num_objs
996850 ± 0% -16.5% 832771 ±10% sched_debug.cpu#15.avg_idle
3441 ± 4% -28.3% 2469 ±28% numa-vmstat.node1.nr_anon_pages
83332 ± 8% +20.7% 100608 ± 2% sched_debug.cpu#2.nr_load_updates
862076 ± 6% -12.5% 754527 ± 7% sched_debug.cpu#10.avg_idle
319014 ± 1% -13.3% 276514 ± 0% cpuidle.C7-SNB.usage
11.66 ± 8% +28.0% 14.93 ± 6% turbostat.%c1
453 ± 7% +35.0% 612 ± 9% slabinfo.dnotify_mark.active_objs
453 ± 7% +35.0% 612 ± 9% slabinfo.dnotify_mark.num_objs
450 ± 6% +23.7% 557 ± 0% slabinfo.kmem_cache_node.active_objs
469 ± 6% +22.7% 576 ± 0% slabinfo.kmem_cache_node.num_objs
443 ± 6% -14.8% 377 ± 6% numa-vmstat.node1.nr_page_table_pages
139086 ± 1% +15.0% 160011 ± 1% softirqs.SCHED
11164 ± 2% +15.2% 12864 ±10% slabinfo.kmalloc-256.active_objs
1772 ± 6% -15.0% 1506 ± 6% numa-meminfo.node1.PageTables
11412 ± 1% +16.0% 13237 ± 8% slabinfo.kmalloc-256.num_objs
989992 ± 1% -14.4% 846954 ± 4% sched_debug.cpu#14.avg_idle
386658 ± 2% +9.5% 423490 ± 1% softirqs.TIMER
418 ± 4% +4948.2% 21101 ± 1% time.involuntary_context_switches
6.42 ± 0% +1.9% 6.54 ± 0% turbostat.%c0
29456 ± 7% +40.8% 41487 ± 5% vmstat.system.cs
8227 ±10% +34.8% 11087 ± 3% vmstat.system.in
0.00 ± 1% +4.4% 0.00 ± 0% energy.energy-cores
43.93 ± 1% +4.4% 45.86 ± 0% turbostat.Cor_W
71.43 ± 0% +2.7% 73.35 ± 0% turbostat.Pkg_W
0.00 ± 0% +2.7% 0.00 ± 0% energy.energy-pkg

testbox/testcase/testparams: lkp-a04/netperf/300s-200%-UDP_RR

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
17781 ±22% -39.7% 10718 ±28% sched_debug.cpu#3.sched_goidle
426 ± 6% +8.4% 461 ± 4% sched_debug.cpu#0.cpu_load[1]
410 ± 8% +11.6% 458 ± 3% sched_debug.cpu#0.cpu_load[0]
12459 ±25% +26.1% 15711 ± 8% sched_debug.cpu#1.sched_goidle
565 ± 5% +15.9% 655 ±10% slabinfo.kmalloc-512.active_objs
27181 ±14% -37.6% 16948 ±43% cpuidle.C4-ATM.time
18120 ± 1% -16.4% 15156 ± 1% softirqs.SCHED
882 ± 6% +14.3% 1008 ± 0% slabinfo.kmalloc-96.active_objs
882 ± 6% +14.3% 1008 ± 0% slabinfo.kmalloc-96.num_objs
0.99 ± 5% -13.4% 0.86 ± 6% perf-profile.cpu-cycles.ip_rcv.__netif_receive_skb_core.__netif_receive_skb.process_backlog.net_rx_action
1.00 ± 4% -7.3% 0.93 ± 2% perf-profile.cpu-cycles.__skb_recv_datagram.udp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom
1.24 ± 1% -10.5% 1.11 ± 4% perf-profile.cpu-cycles.recv_omni.process_requests.spawn_child.accept_connection.accept_connections

testbox/testcase/testparams: lkp-snb01/hackbench/1600%-threads-pipe

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0 +Inf% 0 ±141% last_state.is_incomplete_run
0 +Inf% 0 ±141% last_state.booting

testbox/testcase/testparams: lkp-snb01/hackbench/1600%-threads-socket

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
144955 ± 0% +3.5% 149992 ± 0% hackbench.throughput
469607 ±36% +127.0% 1065815 ±37% sched_debug.cfs_rq[6]:/.spread0
43 ±27% +138.5% 103 ±25% sched_debug.cpu#30.nr_uninterruptible
46 ±22% +122.3% 103 ±24% sched_debug.cpu#20.nr_uninterruptible
8591 ±26% +95.9% 16831 ±48% numa-meminfo.node1.Shmem
1350481 ±37% +50.8% 2036091 ±16% sched_debug.cpu#6.ttwu_local
1369653 ±34% +52.0% 2081643 ±18% sched_debug.cpu#10.ttwu_local
1591125 ±39% +53.3% 2438849 ±18% sched_debug.cpu#13.ttwu_count
40 ±41% +140.0% 96 ±39% numa-vmstat.node0.nr_inactive_anon
29 ±20% +48.3% 44 ±26% sched_debug.cfs_rq[12]:/.load
1692456 ±41% +61.6% 2735455 ±17% sched_debug.cpu#6.sched_count
2398 ±24% +69.8% 4071 ±27% sched_debug.cfs_rq[31]:/.tg_load_contrib
13 ±46% +241.5% 46 ±43% sched_debug.cpu#5.nr_uninterruptible
1633856 ±35% +47.0% 2400982 ±19% sched_debug.cpu#10.ttwu_count
1412831 ±37% +59.2% 2249726 ±17% sched_debug.cpu#7.ttwu_count
1639820 ±37% +53.6% 2518680 ±17% sched_debug.cpu#30.ttwu_count
2284 ±26% +95.8% 4471 ±23% sched_debug.cfs_rq[30]:/.blocked_load_avg
8517470 ±34% +49.5% 12735720 ±20% sched_debug.cfs_rq[14]:/.min_vruntime
41 ±18% +102.4% 83 ±34% sched_debug.cpu#4.load
7998814 ±37% +51.5% 12121260 ±18% sched_debug.cfs_rq[4]:/.min_vruntime
9075023 ±34% +42.9% 12971768 ±20% sched_debug.cfs_rq[13]:/.min_vruntime
49 ±14% -36.9% 31 ±13% sched_debug.cfs_rq[22]:/.load
37 ±15% +120.4% 83 ±34% sched_debug.cfs_rq[4]:/.load
49 ±14% -35.8% 31 ±14% sched_debug.cpu#22.load
1770212 ±34% +52.6% 2701111 ±15% sched_debug.cpu#23.sched_count
1740470 ±37% +53.9% 2678976 ±20% sched_debug.cpu#14.sched_count
25 ±17% +70.7% 42 ±23% sched_debug.cpu#13.cpu_load[1]
2318 ±19% +51.0% 3500 ±22% sched_debug.cfs_rq[24]:/.blocked_load_avg
27 ± 6% +34.1% 36 ±13% sched_debug.cpu#11.cpu_load[4]
1398 ±17% +176.5% 3867 ±21% sched_debug.cfs_rq[14]:/.tg_load_contrib
2365 ±24% +69.8% 4014 ±28% sched_debug.cfs_rq[31]:/.blocked_load_avg
2345 ±25% +92.5% 4513 ±23% sched_debug.cfs_rq[30]:/.tg_load_contrib
1629777 ±35% +61.5% 2631445 ±19% sched_debug.cpu#29.ttwu_count
1402021 ±37% +61.0% 2256755 ±12% sched_debug.cpu#24.ttwu_local
1 ± 0% +233.3% 3 ±37% sched_debug.cfs_rq[13]:/.nr_spread_over
344797 ±40% +200.2% 1035150 ±36% proc-vmstat.pgfault
1875924 ±36% +67.9% 3149828 ±17% sched_debug.cpu#24.nr_switches
1875929 ±36% +67.9% 3149836 ±17% sched_debug.cpu#24.sched_count
1526264 ±37% +51.3% 2308860 ±16% sched_debug.cpu#6.ttwu_count
4153 ±23% -31.6% 2839 ±27% sched_debug.cfs_rq[15]:/.tg_load_contrib
2465 ±21% +53.8% 3791 ±23% sched_debug.cfs_rq[9]:/.blocked_load_avg
2091990 ±34% +48.7% 3110913 ±21% sched_debug.cpu#29.sched_count
27940 ±41% +108.4% 58221 ±16% sched_debug.cpu#23.sched_goidle
7730782 ±34% +54.0% 11906817 ±17% sched_debug.cfs_rq[5]:/.min_vruntime
21 ± 5% +71.9% 36 ±21% sched_debug.cpu#28.cpu_load[2]
21 ± 7% +63.1% 35 ±19% sched_debug.cpu#28.cpu_load[3]
2436 ±21% +66.9% 4068 ±29% sched_debug.cfs_rq[13]:/.tg_load_contrib
1605358 ±37% +63.4% 2623281 ±18% sched_debug.cpu#5.sched_count
1739895 ±37% +53.7% 2675024 ±20% sched_debug.cpu#14.nr_switches
1696861 ±41% +58.0% 2680333 ±14% sched_debug.cpu#24.ttwu_count
1935125 ±38% +44.9% 2803150 ±20% sched_debug.cpu#10.nr_switches
22 ± 9% +56.1% 34 ± 5% sched_debug.cfs_rq[11]:/.runnable_load_avg
21 ±13% +56.3% 33 ±18% sched_debug.cpu#28.cpu_load[4]
66 ±45% -54.0% 30 ±34% sched_debug.cpu#6.nr_uninterruptible
1421075 ±39% +61.3% 2292004 ±17% sched_debug.cpu#2.ttwu_count
1675341 ±41% +61.3% 2701696 ±16% sched_debug.cpu#6.nr_switches
21 ± 5% +64.6% 35 ±20% sched_debug.cpu#28.cpu_load[1]
1589299 ±34% +49.4% 2374588 ±19% sched_debug.cpu#9.ttwu_count
11 ±19% +118.2% 24 ±20% sched_debug.cfs_rq[25]:/.nr_spread_over
2366 ±24% +169.5% 6378 ±36% proc-vmstat.nr_shmem
28 ± 3% +33.7% 38 ±10% sched_debug.cpu#11.cpu_load[3]
2348 ±19% +50.6% 3537 ±21% sched_debug.cfs_rq[24]:/.tg_load_contrib
30 ±18% +23.9% 38 ±13% sched_debug.cpu#11.cpu_load[1]
29 ± 8% +29.2% 38 ±11% sched_debug.cpu#11.cpu_load[2]
33 ± 9% -16.2% 27 ±11% sched_debug.cfs_rq[7]:/.runnable_load_avg
20 ±16% +56.7% 31 ±10% sched_debug.cpu#28.cpu_load[0]
57 ±47% +111.7% 120 ±10% cpuidle.POLL.usage
1164777 ± 9% -18.9% 945177 ±15% sched_debug.cpu#2.avg_idle
1124134 ± 5% -31.5% 769685 ±30% sched_debug.cpu#3.avg_idle
1722266 ±37% +61.5% 2781561 ±17% sched_debug.cpu#22.sched_count
38 ±20% +62.6% 62 ±38% sched_debug.cpu#19.cpu_load[0]
1765711 ±34% +52.6% 2693739 ±15% sched_debug.cpu#23.nr_switches
18664 ± 3% -14.0% 16047 ± 7% sched_debug.cpu#17.curr->pid
43 ±30% -36.2% 27 ±11% sched_debug.cpu#21.cpu_load[0]
24 ± 8% +27.4% 31 ± 5% sched_debug.cfs_rq[10]:/.runnable_load_avg
2885 ± 7% +17.7% 3394 ± 7% sched_debug.cfs_rq[29]:/.blocked_load_avg
2914 ± 7% +17.6% 3427 ± 7% sched_debug.cfs_rq[29]:/.tg_load_contrib
1939082 ±38% +44.7% 2806802 ±20% sched_debug.cpu#10.sched_count
1601460 ±37% +63.3% 2615545 ±18% sched_debug.cpu#5.nr_switches
27 ± 3% +22.9% 34 ± 4% sched_debug.cpu#5.cpu_load[0]
7789960 ±35% +53.1% 11926079 ±17% sched_debug.cfs_rq[3]:/.min_vruntime
2017783 ±38% +45.5% 2934975 ±20% sched_debug.cpu#31.sched_count
14307 ±15% +45.4% 20799 ± 3% sched_debug.cpu#28.curr->pid
18 ±27% +67.9% 31 ± 9% sched_debug.cfs_rq[28]:/.runnable_load_avg
23 ±21% +61.4% 37 ±27% sched_debug.cpu#13.cpu_load[0]
43 ±27% -40.5% 26 ±16% sched_debug.cfs_rq[21]:/.runnable_load_avg
21 ±20% +65.1% 34 ± 1% sched_debug.cfs_rq[28]:/.load
22 ±46% +67.2% 37 ± 8% sched_debug.cfs_rq[9]:/.load
8190727 ±36% +52.5% 12487628 ±19% sched_debug.cfs_rq[19]:/.min_vruntime
2015494 ±34% +45.1% 2924514 ±21% sched_debug.cpu#30.nr_switches
20 ±18% +154.1% 51 ±46% sched_debug.cpu#28.load
2015322 ±38% +45.5% 2932466 ±20% sched_debug.cpu#31.nr_switches
7652403 ±34% +59.9% 12239590 ±16% sched_debug.cfs_rq[6]:/.min_vruntime
2255 ±34% +41.1% 3181 ±12% sched_debug.cfs_rq[21]:/.blocked_load_avg
15 ±18% -55.6% 6 ±46% sched_debug.cfs_rq[10]:/.nr_spread_over
23 ±23% -30.0% 16 ±11% sched_debug.cpu#0.cpu_load[0]
90586 ±21% +133.3% 211300 ±18% sched_debug.cpu#20.nr_load_updates
46776 ±36% +53.4% 71746 ±34% sched_debug.cpu#11.sched_goidle
8541800 ±32% +54.3% 13181879 ±19% sched_debug.cfs_rq[24]:/.min_vruntime
2017977 ±34% +45.1% 2927629 ±21% sched_debug.cpu#30.sched_count
8808565 ±34% +44.7% 12746465 ±20% sched_debug.cfs_rq[10]:/.min_vruntime
1366 ±18% +180.9% 3838 ±21% sched_debug.cfs_rq[14]:/.blocked_load_avg
2241 ±45% -60.1% 895 ±16% sched_debug.cfs_rq[1]:/.tg_load_contrib
25 ±28% +48.0% 37 ± 4% sched_debug.cpu#15.cpu_load[3]
2209 ±46% -60.9% 863 ±16% sched_debug.cfs_rq[1]:/.blocked_load_avg
42 ± 2% -28.1% 30 ± 4% sched_debug.cpu#3.cpu_load[0]
1597804 ±37% +62.3% 2593330 ±17% sched_debug.cpu#7.nr_switches
25 ±32% +40.3% 36 ± 4% sched_debug.cpu#15.cpu_load[1]
28687 ±44% +120.7% 63303 ±20% sched_debug.cpu#7.sched_goidle
25 ±26% +40.8% 35 ± 4% sched_debug.cpu#15.cpu_load[4]
24 ±30% +51.4% 37 ± 2% sched_debug.cpu#15.cpu_load[2]
8082735 ±35% +55.5% 12567911 ±16% sched_debug.cfs_rq[22]:/.min_vruntime
9052400 ±33% +39.9% 12660242 ±19% sched_debug.cfs_rq[11]:/.min_vruntime
8894 ±16% +146.7% 21946 ±34% meminfo.Shmem
1412482 ±33% +60.2% 2262381 ±19% sched_debug.cpu#29.ttwu_local
2091479 ±34% +48.7% 3110611 ±21% sched_debug.cpu#29.nr_switches
2403 ±22% +67.5% 4024 ±29% sched_debug.cfs_rq[13]:/.blocked_load_avg
1603334 ±38% +62.1% 2599754 ±17% sched_debug.cpu#7.sched_count
8734614 ±35% +42.6% 12456632 ±21% sched_debug.cfs_rq[26]:/.min_vruntime
5482284 ±22% -33.8% 3626939 ±17% sched_debug.cpu#31.max_idle_balance_cost
38 ±11% -20.0% 30 ± 4% sched_debug.cpu#3.cpu_load[3]
2513 ±21% +52.7% 3838 ±23% sched_debug.cfs_rq[9]:/.tg_load_contrib
58410910 ±12% -30.0% 40912772 ±28% cpuidle.C1E-SNB.time
7962719 ±38% +54.2% 12276900 ±16% sched_debug.cfs_rq[21]:/.min_vruntime
16966 ± 3% +12.4% 19071 ± 6% sched_debug.cpu#1.curr->pid
8896468 ±34% +45.4% 12938806 ±19% sched_debug.cfs_rq[28]:/.min_vruntime
4055 ±28% -35.9% 2598 ± 7% sched_debug.cfs_rq[22]:/.blocked_load_avg
8798874 ±33% +46.1% 12851935 ±19% sched_debug.cfs_rq[31]:/.min_vruntime
8162157 ±31% +48.5% 12122503 ±20% sched_debug.cfs_rq[23]:/.min_vruntime
5167674 ±28% -37.4% 3233547 ± 3% sched_debug.cpu#24.max_idle_balance_cost
8545579 ±32% +47.5% 12602337 ±19% sched_debug.cfs_rq[17]:/.min_vruntime
1716393 ±38% +62.0% 2780327 ±17% sched_debug.cpu#22.nr_switches
4096 ±28% -34.9% 2668 ± 7% sched_debug.cfs_rq[22]:/.tg_load_contrib
38 ±13% -40.4% 22 ±33% sched_debug.cpu#17.cpu_load[0]
6670497 ±32% -39.0% 4071219 ±16% sched_debug.cpu#25.max_idle_balance_cost
0.46 ± 9% +104.3% 0.95 ±18% turbostat.%c7
7671602 ±37% +56.7% 12020919 ±16% sched_debug.cfs_rq[1]:/.min_vruntime
16699 ± 8% +12.0% 18695 ± 6% sched_debug.cpu#8.curr->pid
3340 ± 9% +69.6% 5665 ± 3% cpuidle.C3-SNB.usage
4119 ±23% -32.2% 2794 ±27% sched_debug.cfs_rq[15]:/.blocked_load_avg
2.328e+08 ± 8% -22.9% 1.796e+08 ± 6% cpuidle.C1-SNB.time
102235 ±10% +40.9% 144065 ±15% sched_debug.cpu#0.nr_load_updates
3930107 ±49% -52.5% 1866005 ±25% sched_debug.cpu#22.avg_idle
36 ±12% -16.4% 30 ± 5% sched_debug.cpu#3.cpu_load[4]
7631893 ±29% +61.4% 12317261 ±18% sched_debug.cfs_rq[20]:/.min_vruntime
7077210 ±34% +55.2% 10981933 ±16% sched_debug.cfs_rq[0]:/.min_vruntime
130538 ±30% +60.3% 209286 ±17% sched_debug.cpu#11.nr_load_updates
7732046 ±35% +53.2% 11845661 ±18% sched_debug.cfs_rq[2]:/.min_vruntime
7742195 ±35% +54.5% 11959035 ±17% sched_debug.cfs_rq[7]:/.min_vruntime
46 ±21% -31.7% 31 ± 6% sched_debug.cpu#3.cpu_load[1]
1134823 ±12% -33.2% 758329 ±22% sched_debug.cpu#11.avg_idle
13 ±16% +41.0% 18 ±20% sched_debug.cpu#28.nr_running
770906 ±48% +95.8% 1509648 ±29% numa-numastat.node1.numa_hit
769508 ±48% +96.0% 1508250 ±29% numa-numastat.node1.local_node
3.254e+08 ± 3% +35.0% 4.392e+08 ±10% cpuidle.C7-SNB.time
7873970 ±37% +53.7% 12100093 ±18% sched_debug.cfs_rq[18]:/.min_vruntime
9908 ± 4% -8.6% 9052 ± 2% slabinfo.kmalloc-192.num_objs
42 ±16% -25.4% 31 ± 5% sched_debug.cpu#3.cpu_load[2]
25 ± 5% -21.3% 19 ±11% sched_debug.cpu#22.nr_running
23 ±19% +58.6% 37 ± 3% sched_debug.cfs_rq[11]:/.load
18 ±32% +35.2% 24 ± 3% sched_debug.cpu#12.nr_running
33978 ±16% -51.3% 16531 ± 8% sched_debug.cfs_rq[0]:/.tg_load_avg
9908 ± 4% -8.6% 9052 ± 2% slabinfo.kmalloc-192.active_objs
17065 ±13% +17.7% 20090 ± 2% sched_debug.cpu#29.curr->pid
24 ±41% +47.2% 35 ±16% sched_debug.cfs_rq[29]:/.load
1509 ± 2% +16.3% 1754 ±10% proc-vmstat.nr_inactive_anon
275899 ± 2% +12.3% 309911 ± 1% softirqs.SCHED
331858 ± 6% +14.1% 378649 ± 6% numa-meminfo.node1.SUnreclaim
365515 ± 5% +13.8% 416065 ± 6% numa-meminfo.node1.Slab
58244 ±17% -34.6% 38103 ± 1% sched_debug.cfs_rq[1]:/.tg_load_avg
101161 ± 2% +7.2% 108412 ± 4% sched_debug.cfs_rq[26]:/.tg_load_avg
101605 ± 3% +7.2% 108920 ± 3% sched_debug.cfs_rq[29]:/.tg_load_avg
0.93 ± 5% +11.1% 1.04 ± 5% perf-profile.cpu-cycles.__srcu_read_lock.fsnotify.vfs_write.sys_write.system_call_fastpath
101319 ± 3% +7.3% 108756 ± 4% sched_debug.cfs_rq[28]:/.tg_load_avg
102248 ± 2% +6.5% 108895 ± 4% sched_debug.cfs_rq[27]:/.tg_load_avg
0.82 ± 4% +14.2% 0.94 ± 4% perf-profile.cpu-cycles.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.sock_alloc_send_pskb.unix_stream_sendmsg
13729 ± 3% +10.6% 15183 ± 5% proc-vmstat.nr_slab_reclaimable
48264 ± 4% +12.6% 54361 ± 6% proc-vmstat.nr_active_anon
1.07 ± 5% +14.0% 1.22 ± 5% perf-profile.cpu-cycles.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_recvmsg.sock_aio_read
6025 ± 2% +12.7% 6788 ± 6% meminfo.Inactive(anon)
1065 ± 9% -11.3% 945 ± 0% numa-meminfo.node1.Unevictable
1065 ± 9% -11.5% 942 ± 0% numa-meminfo.node1.Mlocked
138913 ± 5% +7.5% 149308 ± 2% proc-vmstat.nr_slab_unreclaimable
1095284 ± 4% -10.1% 984542 ± 1% time.minor_page_faults

testbox/testcase/testparams: lkp-snb01/hackbench/50%-process-pipe

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
-745 ±-5% -19.6% -598 ±-10% sched_debug.cpu#27.nr_uninterruptible
376 ± 4% +1543.5% 6185 ±46% numa-vmstat.node0.nr_shmem
-586 ±-11% -24.9% -440 ±-20% sched_debug.cpu#21.nr_uninterruptible
2734514 ± 7% -14.6% 2334144 ± 6% sched_debug.cfs_rq[16]:/.spread0
-383 ±-27% +45.3% -557 ±-10% sched_debug.cpu#22.nr_uninterruptible
1506 ± 4% +1544.1% 24759 ±46% numa-meminfo.node0.Shmem
-727 ±-11% -17.7% -598 ±-7% sched_debug.cpu#19.nr_uninterruptible
707 ± 3% -27.4% 513 ±14% sched_debug.cpu#2.nr_uninterruptible
222 ±17% +67.2% 371 ± 2% sched_debug.cpu#15.nr_uninterruptible
-442 ±-9% -19.7% -355 ±-10% sched_debug.cpu#25.nr_uninterruptible
71 ±21% +43.5% 102 ±22% sched_debug.cfs_rq[14]:/.tg_load_contrib
78 ±36% +119.2% 171 ±22% sched_debug.cfs_rq[30]:/.tg_load_contrib
26 ± 6% +16.2% 31 ± 6% sched_debug.cfs_rq[19]:/.load
33 ± 7% -13.0% 29 ± 5% sched_debug.cpu#5.load
234 ±46% -57.5% 99 ±26% sched_debug.cfs_rq[19]:/.tg_load_contrib
34 ± 5% -16.3% 29 ±12% sched_debug.cfs_rq[14]:/.load
60 ±19% +109.9% 126 ±18% sched_debug.cfs_rq[7]:/.tg_load_contrib
68 ±25% +155.3% 175 ±26% sched_debug.cfs_rq[20]:/.blocked_load_avg
32 ± 5% -12.5% 28 ± 5% sched_debug.cfs_rq[5]:/.runnable_load_avg
36 ±15% +107.3% 75 ± 4% sched_debug.cfs_rq[0]:/.blocked_load_avg
2660 ± 6% -64.0% 956 ± 3% cpuidle.POLL.usage
28999 ±28% +44.7% 41951 ±15% sched_debug.cpu#2.avg_idle
317 ± 9% +18.7% 376 ±10% sched_debug.cpu#7.nr_uninterruptible
29 ± 6% -14.6% 25 ± 4% sched_debug.cpu#21.cpu_load[0]
28 ±45% +236.0% 96 ±26% sched_debug.cfs_rq[7]:/.blocked_load_avg
32 ± 5% -17.3% 27 ± 0% sched_debug.cpu#5.cpu_load[0]
62112 ± 3% -28.2% 44609 ±27% numa-meminfo.node1.Active(anon)
32 ± 5% -16.5% 27 ± 0% sched_debug.cpu#5.cpu_load[1]
30 ± 8% -14.4% 25 ± 1% sched_debug.cfs_rq[21]:/.runnable_load_avg
8439 ± 0% +28.9% 10875 ±14% numa-vmstat.node1.numa_other
29 ± 8% -12.4% 26 ± 3% sched_debug.cpu#21.cpu_load[1]
97 ±16% +111.6% 206 ±21% sched_debug.cfs_rq[20]:/.tg_load_contrib
27 ± 4% +14.6% 31 ± 6% sched_debug.cfs_rq[19]:/.runnable_load_avg
30 ± 1% -6.5% 28 ± 3% sched_debug.cpu#25.cpu_load[3]
32 ± 3% -15.3% 27 ± 1% sched_debug.cpu#5.cpu_load[2]
29 ± 8% -12.4% 26 ± 3% sched_debug.cpu#21.cpu_load[2]
30 ± 5% -11.1% 26 ± 1% sched_debug.cpu#21.cpu_load[4]
30 ± 5% -11.1% 26 ± 1% sched_debug.cpu#21.cpu_load[3]
71 ± 6% +51.2% 107 ± 3% sched_debug.cfs_rq[0]:/.tg_load_contrib
29 ± 8% -13.5% 25 ± 4% sched_debug.cpu#21.load
15527 ± 2% -28.1% 11157 ±27% numa-vmstat.node1.nr_active_anon
39 ±38% +82.2% 71 ±34% sched_debug.cfs_rq[14]:/.blocked_load_avg
32 ± 3% -15.3% 27 ± 1% sched_debug.cpu#5.cpu_load[3]
32 ± 2% -13.4% 28 ± 2% sched_debug.cpu#5.cpu_load[4]
36 ±13% -17.3% 30 ± 4% sched_debug.cpu#10.load
24 ±12% +24.3% 30 ± 8% sched_debug.cfs_rq[20]:/.load
66 ±11% +93.5% 129 ±20% sched_debug.cfs_rq[28]:/.blocked_load_avg
24 ± 7% +24.7% 30 ± 8% sched_debug.cpu#20.load
30668 ± 8% +55.0% 47545 ±27% numa-meminfo.node0.Active(anon)
95 ± 8% +66.6% 159 ±16% sched_debug.cfs_rq[28]:/.tg_load_contrib
7667 ± 8% +55.0% 11885 ±27% numa-vmstat.node0.nr_active_anon
57027 ±24% -35.3% 36872 ±19% sched_debug.cpu#6.avg_idle
41749 ±20% +37.6% 57439 ±10% sched_debug.cpu#13.avg_idle
61077 ±19% -43.3% 34602 ±12% sched_debug.cpu#7.avg_idle
56966 ± 8% -24.7% 42880 ±19% sched_debug.cpu#11.avg_idle
24 ±10% +17.6% 29 ± 7% sched_debug.cfs_rq[31]:/.load
25 ±11% +17.1% 29 ± 6% sched_debug.cpu#31.load
62174 ±24% -35.0% 40444 ±23% sched_debug.cpu#19.avg_idle
55229 ± 0% -32.6% 37224 ±14% sched_debug.cpu#16.avg_idle
8 ± 5% +28.0% 10 ±11% sched_debug.cpu#20.nr_running
211665 ± 0% +12.1% 237174 ± 5% numa-meminfo.node0.FilePages
52916 ± 0% +12.0% 59289 ± 5% numa-vmstat.node0.nr_file_pages
48377 ±11% -37.3% 30344 ±20% sched_debug.cpu#4.avg_idle
3814 ± 3% +7.1% 4086 ± 5% sched_debug.cfs_rq[0]:/.tg_load_avg
12 ± 3% -18.4% 10 ±12% sched_debug.cpu#14.nr_running
6477 ± 6% -16.3% 5419 ±11% numa-vmstat.node1.nr_slab_reclaimable
25905 ± 6% -16.3% 21676 ±11% numa-meminfo.node1.SReclaimable
17880 ± 9% +24.3% 22230 ±10% numa-meminfo.node0.SReclaimable
4469 ± 9% +24.3% 5557 ±10% numa-vmstat.node0.nr_slab_reclaimable
1.11 ± 2% -14.1% 0.95 ± 3% perf-profile.cpu-cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
8769 ± 4% +11.2% 9751 ± 1% numa-vmstat.node0.nr_slab_unreclaimable
12 ± 6% -13.9% 10 ± 4% sched_debug.cpu#5.nr_running
97815 ± 3% -8.4% 89618 ± 3% numa-meminfo.node1.Slab
52957 ± 5% +15.7% 61254 ± 3% numa-meminfo.node0.Slab
31 ± 3% -8.5% 28 ± 3% sched_debug.cpu#25.cpu_load[4]
252921 ± 0% -10.1% 227367 ± 5% numa-meminfo.node1.FilePages
63234 ± 0% -10.1% 56843 ± 5% numa-vmstat.node1.nr_file_pages
35077 ± 4% +11.2% 39023 ± 1% numa-meminfo.node0.SUnreclaim
5759109 ± 0% -1.5% 5672917 ± 0% vmstat.system.cs
865590 ± 0% -1.2% 855126 ± 0% vmstat.system.in
1254 ± 0% -1.3% 1238 ± 0% time.user_time

testbox/testcase/testparams: lkp-snb01/hackbench/50%-process-socket

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
813898.55 ±45% -100.0% 0.00 ± 0% sched_debug.cfs_rq[25]:/.MIN_vruntime
813898.55 ±45% -100.0% 0.00 ± 0% sched_debug.cfs_rq[25]:/.max_vruntime
2841487 ± 3% +14.1% 3241106 ± 3% sched_debug.cfs_rq[13]:/.spread0
2533569 ± 4% +17.2% 2968494 ± 9% sched_debug.cfs_rq[10]:/.spread0
2047785 ± 9% +20.0% 2456470 ± 9% sched_debug.cfs_rq[8]:/.spread0
13370 ± 6% -7.8% 12327 ± 5% sched_debug.cpu#27.curr->pid
468040 ±45% +92.8% 902473 ±33% sched_debug.cfs_rq[17]:/.MIN_vruntime
468040 ±45% +92.8% 902473 ±33% sched_debug.cfs_rq[17]:/.max_vruntime
26 ± 0% +10.3% 28 ± 1% sched_debug.cpu#26.cpu_load[4]
104 ±14% -31.5% 71 ± 9% sched_debug.cfs_rq[12]:/.tg_load_contrib
74 ±18% -40.4% 44 ±17% sched_debug.cfs_rq[12]:/.blocked_load_avg
26 ± 0% +10.3% 28 ± 1% sched_debug.cpu#26.cpu_load[3]
31 ± 7% +12.9% 35 ± 4% sched_debug.cfs_rq[5]:/.load
32 ± 3% -12.4% 28 ± 5% sched_debug.cfs_rq[14]:/.runnable_load_avg
29 ± 4% +14.9% 33 ± 5% sched_debug.cfs_rq[4]:/.runnable_load_avg
4208 ± 0% -99.8% 7 ±34% numa-numastat.node0.other_node
47 ±26% +99.3% 93 ±27% sched_debug.cfs_rq[9]:/.blocked_load_avg
126 ± 9% -32.2% 85 ±19% sched_debug.cfs_rq[4]:/.tg_load_contrib
25 ± 0% +14.7% 28 ± 7% sched_debug.cpu#26.cpu_load[0]
176 ±21% -68.2% 56 ±47% sched_debug.cfs_rq[20]:/.blocked_load_avg
110715 ± 6% -29.6% 77907 ± 0% numa-meminfo.node1.Active
2546 ± 1% -46.8% 1355 ± 4% cpuidle.POLL.usage
99 ± 9% -35.4% 64 ±25% sched_debug.cfs_rq[3]:/.tg_load_contrib
69536 ± 9% -48.0% 36187 ± 2% numa-meminfo.node1.Active(anon)
13151 ± 5% -8.4% 12052 ± 4% sched_debug.cpu#28.curr->pid
30 ± 2% -8.9% 27 ± 1% sched_debug.cpu#13.cpu_load[0]
24 ± 9% +17.6% 29 ± 4% sched_debug.cfs_rq[21]:/.runnable_load_avg
27 ±14% -17.1% 22 ±13% sched_debug.cfs_rq[28]:/.load
8305 ± 0% +43.7% 11938 ± 2% numa-vmstat.node1.numa_other
24 ± 7% +12.3% 27 ± 3% sched_debug.cpu#21.cpu_load[1]
26 ±12% -15.0% 22 ±13% sched_debug.cpu#28.load
210 ±16% -60.6% 83 ±32% sched_debug.cfs_rq[20]:/.tg_load_contrib
13282 ± 1% -10.5% 11888 ± 1% sched_debug.cpu#26.curr->pid
24 ±13% +19.2% 29 ± 4% sched_debug.cfs_rq[21]:/.load
24 ± 5% +13.7% 27 ± 4% sched_debug.cpu#21.cpu_load[2]
25 ± 3% +13.3% 28 ± 4% sched_debug.cpu#21.cpu_load[4]
24 ± 3% +13.5% 28 ± 2% sched_debug.cpu#21.cpu_load[3]
24 ±15% +19.2% 29 ± 4% sched_debug.cpu#21.load
17376 ± 9% -48.0% 9038 ± 2% numa-vmstat.node1.nr_active_anon
34 ±17% -22.5% 26 ± 6% sched_debug.cfs_rq[20]:/.runnable_load_avg
34 ±22% -26.5% 25 ±11% sched_debug.cfs_rq[20]:/.load
29 ± 3% -15.9% 24 ±13% sched_debug.cpu#20.load
24649 ±26% +134.5% 57798 ± 1% numa-meminfo.node0.Active(anon)
28 ± 3% -12.8% 25 ± 6% sched_debug.cpu#20.cpu_load[2]
77 ±16% +60.8% 124 ±20% sched_debug.cfs_rq[9]:/.tg_load_contrib
6158 ±26% +134.7% 14451 ± 1% numa-vmstat.node0.nr_active_anon
97 ±13% -46.7% 51 ±29% sched_debug.cfs_rq[4]:/.blocked_load_avg
67282 ±10% +47.8% 99432 ± 1% numa-meminfo.node0.Active
31 ± 6% +12.8% 35 ± 3% sched_debug.cpu#3.load
32887 ±31% +45.2% 47767 ± 6% sched_debug.cpu#5.avg_idle
40530 ±17% +44.0% 58353 ±13% sched_debug.cpu#21.avg_idle
49686 ±22% -35.8% 31895 ±10% sched_debug.cpu#22.avg_idle
30 ± 3% -9.8% 27 ± 3% sched_debug.cfs_rq[8]:/.runnable_load_avg
35312 ± 7% +36.8% 48309 ±13% sched_debug.cpu#18.avg_idle
39897 ± 7% -25.4% 29761 ± 5% sched_debug.cpu#8.avg_idle
40180 ± 5% +19.0% 47800 ± 3% sched_debug.cpu#16.avg_idle
216166 ± 1% +16.7% 252197 ± 1% numa-meminfo.node0.FilePages
54041 ± 1% +16.7% 63050 ± 1% numa-vmstat.node0.nr_file_pages
11 ± 7% -21.2% 8 ±14% sched_debug.cpu#14.nr_running
37603 ±22% +25.8% 47317 ± 7% sched_debug.cpu#20.avg_idle
31 ± 7% -13.8% 27 ± 8% sched_debug.cpu#23.load
293929 ± 4% +6.2% 312221 ± 2% softirqs.SCHED
741 ± 8% -14.0% 637 ± 7% slabinfo.buffer_head.active_objs
741 ± 8% -14.0% 637 ± 7% slabinfo.buffer_head.num_objs
31 ±10% -12.9% 27 ± 8% sched_debug.cfs_rq[23]:/.load
249967 ± 1% -14.5% 213710 ± 1% numa-meminfo.node1.FilePages
62493 ± 1% -14.5% 53427 ± 1% numa-vmstat.node1.nr_file_pages
3231743 ± 0% +1.0% 3264953 ± 0% vmstat.system.cs
559279 ± 0% +1.2% 566102 ± 0% vmstat.system.in

testbox/testcase/testparams: lkp-snb01/hackbench/50%-threads-pipe

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
-231 ±-42% +66.1% -383 ± 0% sched_debug.cpu#29.nr_uninterruptible
352987 ±34% -62.6% 132004 ±49% sched_debug.cfs_rq[1]:/.spread0
2000694 ±10% -14.8% 1703642 ± 6% sched_debug.cfs_rq[20]:/.spread0
2669321 ± 7% -25.5% 1987489 ± 9% sched_debug.cfs_rq[26]:/.spread0
2218008 ± 5% -11.1% 1972491 ± 8% sched_debug.cfs_rq[27]:/.spread0
429280 ±32% +76.9% 759410 ±10% sched_debug.cfs_rq[7]:/.spread0
2416889 ± 2% -12.2% 2121635 ± 5% sched_debug.cfs_rq[28]:/.spread0
-388 ±-20% -32.8% -261 ±-3% sched_debug.cpu#30.nr_uninterruptible
353 ± 2% +13.4% 400 ± 6% sched_debug.cpu#9.nr_uninterruptible
-264 ±-11% -20.7% -209 ±-4% sched_debug.cpu#31.nr_uninterruptible
795535 ±27% -38.8% 486872 ±18% sched_debug.cfs_rq[9]:/.spread0
143 ±20% +24.0% 177 ± 8% sched_debug.cpu#15.nr_uninterruptible
197 ±22% -40.4% 117 ±34% sched_debug.cfs_rq[19]:/.blocked_load_avg
888413 ±18% -52.4% 422792 ± 4% sched_debug.cfs_rq[8]:/.spread0
-317 ±-14% -33.3% -211 ±-47% sched_debug.cpu#25.nr_uninterruptible
157 ±29% +63.7% 257 ±29% sched_debug.cfs_rq[27]:/.blocked_load_avg
96 ±28% -34.3% 63 ±24% sched_debug.cfs_rq[12]:/.tg_load_contrib
87 ±19% -54.6% 39 ±16% sched_debug.cfs_rq[24]:/.blocked_load_avg
411 ±41% -80.0% 82 ±48% sched_debug.cfs_rq[25]:/.blocked_load_avg
106 ±13% -38.1% 65 ±30% sched_debug.cfs_rq[30]:/.tg_load_contrib
26 ± 6% +17.5% 31 ± 3% sched_debug.cpu#25.cpu_load[1]
31 ± 6% -10.5% 28 ± 6% sched_debug.cfs_rq[27]:/.runnable_load_avg
31 ± 5% -14.0% 26 ± 9% sched_debug.cfs_rq[27]:/.load
439 ±39% -74.2% 113 ±36% sched_debug.cfs_rq[25]:/.tg_load_contrib
227 ±17% -34.5% 148 ±27% sched_debug.cfs_rq[19]:/.tg_load_contrib
1056797 ± 4% +13.7% 1201745 ± 6% sched_debug.cpu#21.sched_goidle
189 ±23% +50.8% 286 ±26% sched_debug.cfs_rq[27]:/.tg_load_contrib
31 ± 3% -14.9% 26 ± 9% sched_debug.cpu#27.load
1130048 ± 3% +10.5% 1248905 ± 2% sched_debug.cpu#24.sched_goidle
902294 ± 2% +11.2% 1003068 ± 5% sched_debug.cpu#14.sched_goidle
116 ±13% -41.0% 68 ± 9% sched_debug.cfs_rq[24]:/.tg_load_contrib
27 ± 3% +12.0% 31 ± 4% sched_debug.cpu#25.cpu_load[2]
1783 ± 3% -68.9% 555 ± 3% cpuidle.POLL.usage
78441 ±32% -40.5% 46653 ±15% sched_debug.cpu#3.avg_idle
765606 ± 7% +11.7% 855212 ± 4% sched_debug.cpu#8.sched_goidle
31 ± 9% -13.7% 27 ± 7% sched_debug.cpu#17.load
555616 ± 7% +14.5% 636021 ± 8% cpuidle.C3-SNB.time
877042 ± 5% +17.4% 1029798 ± 2% sched_debug.cpu#13.sched_goidle
47197 ± 2% -27.6% 34170 ±37% numa-meminfo.node1.Active(anon)
25 ± 3% +34.7% 33 ±10% sched_debug.cfs_rq[25]:/.load
26 ± 6% +20.0% 32 ± 2% sched_debug.cpu#25.cpu_load[0]
26 ± 6% +16.2% 31 ± 6% sched_debug.cfs_rq[25]:/.runnable_load_avg
30 ± 4% -13.2% 26 ± 3% sched_debug.cfs_rq[21]:/.load
28 ± 5% +9.5% 30 ± 4% sched_debug.cpu#25.cpu_load[3]
64 ±27% +104.2% 130 ±18% sched_debug.cfs_rq[0]:/.tg_load_contrib
1137850 ± 3% +16.6% 1326788 ± 5% sched_debug.cpu#31.sched_goidle
30 ± 3% -14.1% 26 ± 1% sched_debug.cpu#21.load
11804 ± 2% -27.6% 8548 ±37% numa-vmstat.node1.nr_active_anon
844857 ± 4% +11.3% 940540 ± 2% sched_debug.cpu#11.sched_goidle
1181283 ± 3% +10.3% 1302776 ± 4% sched_debug.cpu#30.sched_goidle
56347 ±23% -32.8% 37850 ±16% sched_debug.cpu#1.avg_idle
21182 ± 5% +59.6% 33804 ±37% numa-meminfo.node0.Active(anon)
31 ± 5% -7.5% 28 ± 3% sched_debug.cpu#20.cpu_load[2]
79877 ± 7% +13.7% 90800 ± 3% meminfo.DirectMap4k
5294 ± 5% +59.7% 8452 ±37% numa-vmstat.node0.nr_active_anon
69161 ±25% -28.5% 49469 ±26% sched_debug.cpu#28.avg_idle
70975 ± 9% -44.3% 39542 ±25% sched_debug.cpu#6.avg_idle
81808 ±18% -20.6% 64989 ±23% sched_debug.cpu#21.avg_idle
71797 ±13% -15.9% 60409 ±11% sched_debug.cpu#22.avg_idle
74546 ±19% -32.3% 50439 ±21% sched_debug.cpu#17.avg_idle
16769 ± 4% -15.6% 14155 ±13% numa-meminfo.node1.AnonPages
93643 ±19% -47.8% 48884 ±11% sched_debug.cpu#8.avg_idle
31 ± 9% -15.8% 26 ± 4% sched_debug.cfs_rq[31]:/.load
31 ±10% -17.0% 26 ± 5% sched_debug.cpu#31.load
1184850 ± 2% +10.1% 1304822 ± 1% sched_debug.cpu#29.sched_goidle
19014825 ± 4% +12.5% 21391786 ± 4% cpuidle.C7-SNB.time
8 ± 5% +38.5% 12 ±13% sched_debug.cpu#25.nr_running
65471 ±10% -33.0% 43881 ±13% sched_debug.cpu#15.avg_idle
4192 ± 4% -15.7% 3534 ±13% numa-vmstat.node1.nr_anon_pages
32 ± 6% -14.6% 27 ± 6% sched_debug.cpu#29.load
4066 ± 6% -13.9% 3502 ± 8% sched_debug.cfs_rq[0]:/.tg_load_avg
84719 ± 6% -36.5% 53778 ±32% sched_debug.cpu#20.avg_idle
27 ± 1% -9.6% 25 ± 3% sched_debug.cfs_rq[30]:/.load
32 ± 6% -17.7% 26 ± 7% sched_debug.cfs_rq[29]:/.load
225864 ± 0% +15.1% 259863 ± 1% softirqs.SCHED
11 ± 4% -17.6% 9 ± 5% sched_debug.cpu#21.nr_running
11 ± 8% -20.0% 9 ± 5% sched_debug.cpu#29.nr_running
3817 ± 8% -13.6% 3298 ± 2% sched_debug.cfs_rq[24]:/.tg_load_avg
3864 ± 9% -10.6% 3455 ± 5% sched_debug.cfs_rq[15]:/.tg_load_avg
3845 ± 9% -12.8% 3353 ± 3% sched_debug.cfs_rq[19]:/.tg_load_avg
3800 ± 7% -13.6% 3284 ± 1% sched_debug.cfs_rq[25]:/.tg_load_avg
3830 ± 9% -13.1% 3329 ± 2% sched_debug.cfs_rq[20]:/.tg_load_avg
3819 ± 8% -12.8% 3330 ± 2% sched_debug.cfs_rq[21]:/.tg_load_avg
3829 ± 9% -11.7% 3382 ± 4% sched_debug.cfs_rq[18]:/.tg_load_avg
3816 ± 8% -10.2% 3428 ± 8% sched_debug.cfs_rq[9]:/.tg_load_avg
3806 ± 8% -13.1% 3307 ± 2% sched_debug.cfs_rq[23]:/.tg_load_avg
3852 ± 8% -11.3% 3417 ± 7% sched_debug.cfs_rq[12]:/.tg_load_avg
3783 ± 9% -14.2% 3247 ± 2% sched_debug.cfs_rq[30]:/.tg_load_avg
3790 ± 9% -14.6% 3238 ± 2% sched_debug.cfs_rq[31]:/.tg_load_avg
3893 ± 5% -12.0% 3426 ± 9% sched_debug.cfs_rq[8]:/.tg_load_avg
3813 ± 8% -12.7% 3330 ± 2% sched_debug.cfs_rq[22]:/.tg_load_avg
3992 ± 5% -13.7% 3446 ± 9% sched_debug.cfs_rq[1]:/.tg_load_avg
3787 ± 9% -13.9% 3259 ± 1% sched_debug.cfs_rq[26]:/.tg_load_avg
1233 ± 1% +9.6% 1351 ± 2% uptime.idle
3822 ± 8% -10.1% 3436 ± 7% sched_debug.cfs_rq[11]:/.tg_load_avg
5103 ± 3% +11.2% 5673 ± 8% numa-vmstat.node0.nr_anon_pages
3830 ± 8% -9.8% 3453 ± 7% sched_debug.cfs_rq[10]:/.tg_load_avg
3790 ± 8% -14.1% 3258 ± 2% sched_debug.cfs_rq[29]:/.tg_load_avg
3800 ± 8% -14.1% 3263 ± 2% sched_debug.cfs_rq[28]:/.tg_load_avg
3873 ± 8% -10.7% 3457 ± 7% sched_debug.cfs_rq[13]:/.tg_load_avg
3880 ± 8% -10.0% 3493 ± 6% sched_debug.cfs_rq[14]:/.tg_load_avg
1.76 ± 3% -9.1% 1.60 ± 2% perf-profile.cpu-cycles.__schedule.schedule.pipe_wait.pipe_read.do_sync_read
1.33 ± 9% -14.8% 1.13 ± 2% perf-profile.cpu-cycles.avc_has_perm.inode_has_perm.file_has_perm.selinux_file_permission.security_file_permission
28 ± 7% -12.9% 24 ± 5% sched_debug.cpu#30.load
3914 ± 5% -11.9% 3448 ± 9% sched_debug.cfs_rq[6]:/.tg_load_avg
3908 ± 5% -11.6% 3455 ± 9% sched_debug.cfs_rq[7]:/.tg_load_avg
20420 ± 3% +11.1% 22683 ± 8% numa-meminfo.node0.AnonPages
10 ± 4% -16.1% 8 ± 5% sched_debug.cpu#30.nr_running
1.062e+09 ± 0% -2.9% 1.031e+09 ± 0% time.involuntary_context_switches
2.994e+09 ± 0% -1.4% 2.951e+09 ± 0% time.voluntary_context_switches
6679284 ± 0% -2.0% 6547702 ± 0% vmstat.system.cs
986879 ± 0% -1.2% 975385 ± 0% vmstat.system.in
828 ± 0% -1.6% 815 ± 0% time.user_time

testbox/testcase/testparams: lkp-snb01/hackbench/50%-threads-socket

71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1170822 ± 5% +17.2% 1371820 ± 7% sched_debug.cfs_rq[3]:/.spread0
819914 ±12% +19.2% 977633 ± 6% sched_debug.cfs_rq[2]:/.spread0
28 ± 2% -9.5% 25 ± 1% sched_debug.cpu#16.cpu_load[2]
122 ±12% -32.8% 82 ±14% sched_debug.cfs_rq[11]:/.tg_load_contrib
118 ±36% +332.2% 510 ±41% sched_debug.cfs_rq[1]:/.nr_spread_over
65 ±16% +43.1% 93 ± 9% sched_debug.cfs_rq[15]:/.tg_load_contrib
33 ± 5% -16.8% 28 ± 8% sched_debug.cfs_rq[3]:/.runnable_load_avg
80 ±18% -32.6% 54 ±32% sched_debug.cfs_rq[7]:/.tg_load_contrib
37 ± 5% -17.7% 31 ± 5% sched_debug.cfs_rq[2]:/.load
28 ± 3% -11.8% 25 ± 3% sched_debug.cpu#16.cpu_load[1]
28 ± 1% -12.9% 24 ± 3% sched_debug.cpu#16.cpu_load[0]
70 ±34% +67.1% 117 ±17% sched_debug.cfs_rq[24]:/.tg_load_contrib
26 ± 7% +34.2% 35 ± 3% sched_debug.cfs_rq[15]:/.load
26 ± 7% +33.7% 35 ± 5% sched_debug.cpu#15.load
616 ± 3% +9.3% 674 ± 7% numa-vmstat.node0.nr_kernel_stack
26 ± 4% +15.2% 30 ± 8% sched_debug.cpu#10.cpu_load[1]
2259 ± 2% -47.9% 1178 ± 3% cpuidle.POLL.usage
27 ± 4% -8.4% 25 ± 1% sched_debug.cpu#19.cpu_load[0]
83 ±29% +46.2% 121 ±23% sched_debug.cfs_rq[29]:/.blocked_load_avg
110 ±23% +36.0% 150 ±22% sched_debug.cfs_rq[29]:/.tg_load_contrib
29 ± 1% +10.2% 32 ± 2% sched_debug.cpu#15.cpu_load[0]
24 ± 8% +15.1% 28 ± 5% sched_debug.cfs_rq[28]:/.runnable_load_avg
26 ±12% +22.8% 32 ± 5% sched_debug.cfs_rq[10]:/.load
27 ± 1% +13.4% 31 ± 7% sched_debug.cpu#10.cpu_load[3]
37 ± 6% -17.7% 31 ± 5% sched_debug.cpu#2.load
27 ± 3% +10.8% 30 ± 8% sched_debug.cpu#10.cpu_load[4]
25 ± 1% +11.8% 28 ± 7% sched_debug.cfs_rq[25]:/.runnable_load_avg
28 ± 1% +16.3% 33 ± 1% sched_debug.cfs_rq[15]:/.runnable_load_avg
27 ± 3% -7.3% 25 ± 1% sched_debug.cfs_rq[19]:/.runnable_load_avg
35 ± 1% -10.3% 32 ± 0% sched_debug.cpu#1.cpu_load[0]
65 ±16% +64.8% 107 ±18% sched_debug.cfs_rq[1]:/.tg_load_contrib
29 ± 2% +14.9% 33 ± 7% sched_debug.cpu#15.cpu_load[3]
29 ±37% +158.6% 75 ±26% sched_debug.cfs_rq[1]:/.blocked_load_avg
29 ± 4% +11.4% 32 ± 3% sched_debug.cpu#15.cpu_load[1]
28 ± 1% +16.3% 33 ± 7% sched_debug.cpu#15.cpu_load[4]
32 ± 3% -19.4% 26 ± 7% sched_debug.cpu#11.load
35 ± 2% -9.4% 32 ± 0% sched_debug.cfs_rq[1]:/.runnable_load_avg
26 ±12% +20.3% 31 ± 3% sched_debug.cpu#10.load
317 ±34% -45.6% 172 ±43% sched_debug.cfs_rq[17]:/.tg_load_contrib
29 ± 4% +12.5% 33 ± 4% sched_debug.cpu#15.cpu_load[2]
74 ±20% -46.9% 39 ±24% sched_debug.cfs_rq[8]:/.blocked_load_avg
107 ±13% -35.6% 69 ±16% sched_debug.cfs_rq[8]:/.tg_load_contrib
89 ±16% -40.5% 53 ±25% sched_debug.cfs_rq[11]:/.blocked_load_avg
67243 ±10% -40.5% 39981 ±13% sched_debug.cpu#28.avg_idle
53609 ±22% -33.5% 35652 ±17% sched_debug.cpu#12.avg_idle
695 ± 2% -8.1% 638 ± 7% numa-vmstat.node1.nr_kernel_stack
45204 ±15% -35.6% 29093 ±13% sched_debug.cpu#5.avg_idle
35 ±31% +67.9% 59 ±15% sched_debug.cfs_rq[15]:/.blocked_load_avg
5559 ± 3% -7.9% 5118 ± 7% numa-meminfo.node1.KernelStack
64428 ±12% -38.1% 39863 ± 9% sched_debug.cpu#18.avg_idle
43 ±22% -30.2% 30 ± 9% sched_debug.cfs_rq[3]:/.load
33 ± 6% -11.1% 29 ± 4% sched_debug.cpu#3.cpu_load[1]
8 ±10% +38.5% 12 ± 6% sched_debug.cpu#15.nr_running
62466 ±19% -34.7% 40779 ± 8% sched_debug.cpu#19.avg_idle
56228 ±15% -24.7% 42343 ± 7% sched_debug.cpu#16.avg_idle
8 ± 0% +29.2% 10 ± 9% sched_debug.cpu#10.nr_running
54061 ±16% -34.5% 35393 ±22% sched_debug.cpu#15.avg_idle
39436 ±22% -27.7% 28510 ±17% sched_debug.cpu#10.avg_idle
32 ± 3% -17.5% 26 ± 9% sched_debug.cfs_rq[11]:/.load
24 ± 5% +15.1% 28 ± 7% sched_debug.cpu#31.cpu_load[1]
274999 ± 0% +11.6% 306774 ± 0% softirqs.SCHED
24 ± 3% +17.6% 29 ± 5% sched_debug.cpu#31.cpu_load[2]
24 ± 3% +17.6% 29 ± 5% sched_debug.cpu#31.cpu_load[3]
25 ± 4% +13.2% 28 ± 6% sched_debug.cfs_rq[31]:/.runnable_load_avg
4050 ± 9% -10.0% 3646 ± 3% sched_debug.cfs_rq[1]:/.tg_load_avg
3914 ± 8% -10.6% 3499 ± 5% sched_debug.cfs_rq[13]:/.tg_load_avg
3915 ± 8% -10.7% 3494 ± 4% sched_debug.cfs_rq[14]:/.tg_load_avg
29 ± 8% -13.6% 25 ± 1% sched_debug.cpu#30.load
25 ± 1% +17.1% 29 ± 5% sched_debug.cpu#31.cpu_load[4]
4064 ± 8% -12.2% 3569 ± 5% sched_debug.cfs_rq[4]:/.tg_load_avg
4937 ± 3% +9.2% 5392 ± 6% numa-meminfo.node0.KernelStack
692520 ± 0% +0.9% 698981 ± 0% vmstat.system.in
680 ± 0% -1.6% 669 ± 0% time.user_time
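
For reference, in these tables the outer columns are the per-commit means over
repeated runs together with their relative standard deviation (the ±N%
columns), and the middle column is the relative change of the patched mean
against the parent. A minimal sketch of that arithmetic in plain Python (the
helper names are illustrative, not lkp-tests code, which is Ruby):

from statistics import mean, pstdev

def rel_stddev_pct(samples):
    # The +/-N% columns: standard deviation relative to the mean.
    m = mean(samples)
    return 100.0 * pstdev(samples) / m if m else 0.0

def pct_change(old_mean, new_mean):
    # The %change column: relative delta of patched vs. parent mean.
    return 100.0 * (new_mean - old_mean) / old_mean

# Example: the softirqs.SCHED row of the hackbench table above.
print(round(pct_change(274999, 306774), 1))  # -> 11.6, i.e. "+11.6%"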

The testbox configurations are:

brickland1: Brickland Ivy Bridge-EX
Memory: 128G

lkp-sb03: Sandy Bridge-EP
Memory: 64G

lkp-snb01: Sandy Bridge-EP
Memory: 32G

lkp-nex04: Nehalem-EX
Memory: 256G

lkp-a06: Atom
Memory: 8G

xps: Nehalem
Memory: 4G

lkp-a04: Atom
Memory: 8G

xps2: Nehalem
Memory: 4G

lkp-a05: Atom
Memory: 8G


To reproduce:

apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
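
Before running, the attached job file can be sanity-checked with any YAML
parser; a minimal sketch, assuming PyYAML is installed (lkp-tests itself
processes the job in Ruby):

# Hypothetical helper, not part of lkp-tests: print the fields
# that identify this particular run.
import yaml  # pip install pyyaml

with open("job.yaml") as f:
    job = yaml.safe_load(f)

for key in ("testcase", "testbox", "kconfig", "commit"):
    print(key, "=", job.get(key))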


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Fengguang
---
testcase: will-it-scale
default_monitors:
  watch-oom:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  energy:
  cpuidle:
  cpufreq:
  turbostat:
  sched_debug:
    interval: 10
  pmeter:
model: Sandy Bridge-EP
memory: 32G
hdd_partitions: "/dev/sda2"
swap_partitions:
perf-profile:
  freq: 800
will-it-scale:
  test:
  - open2
branch: linus/master
commit: 19583ca584d6f574384e17fe7613dfaeadcdc4a6
repeat_to: 3
enqueue_time: 2014-09-25 21:49:55.795426801 +08:00
testbox: lkp-snb01
kconfig: x86_64-rhel
kernel: "/kernel/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/vmlinuz-3.16.0"
user: lkp
queue: wfg
result_root: "/result/lkp-snb01/will-it-scale/open2/debian-x86_64.cgz/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/0"
job_file: "/lkp/scheduled/lkp-snb01/wfg_will-it-scale-open2-x86_64-rhel-19583ca584d6f574384e17fe7613dfaeadcdc4a6-2.yaml"
dequeue_time: 2014-09-28 03:05:58.335268061 +08:00
history_time: 300
job_state: finished
loadavg: 22.58 12.50 5.05 1/495 10394
start_time: '1411844801'
end_time: '1411845111'
version: "/lkp/lkp/.src-20140927-200334"
./runtest.py open2 25 1 8 16 24 32