Re: [LKP] [lkp] [sched/fair] 53d3bc773e: hackbench.throughput -32.9% regression

From: Huang, Ying
Date: Tue May 31 2016 - 04:34:46 EST


Hi, Ingo,

Part of the regression has been recovered in v4.7-rc1: the throughput change
went from -32.9% to -9.8%. But some regression remains. Is it possible to
fully restore it?

Details are as below.

Best Regards,
Huang, Ying


=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/socket/x86_64-rhel/threads/50%/debian-x86_64-2015-02-07.cgz/ivb42/hackbench

commit:
c5114626f33b62fa7595e57d87f33d9d1f8298a2
53d3bc773eaa7ab1cf63585e76af7ee869d5e709
v4.7-rc1

c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76 v4.7-rc1
---------------- -------------------------- --------------------------
value ±%stddev    %change  value ±%stddev    %change  value ±%stddev    metric
196590 ± 0% -32.9% 131963 ± 2% -9.8% 177231 ± 0% hackbench.throughput
602.66 ± 0% +2.8% 619.27 ± 2% +0.3% 604.66 ± 0% hackbench.time.elapsed_time
602.66 ± 0% +2.8% 619.27 ± 2% +0.3% 604.66 ± 0% hackbench.time.elapsed_time.max
1.76e+08 ± 3% +236.0% 5.914e+08 ± 2% -49.6% 88783232 ± 5% hackbench.time.involuntary_context_switches
208664 ± 2% +26.0% 262929 ± 3% +15.7% 241377 ± 0% hackbench.time.minor_page_faults
4401 ± 0% +5.7% 4650 ± 0% -8.1% 4043 ± 0% hackbench.time.percent_of_cpu_this_job_got
25256 ± 0% +10.2% 27842 ± 2% -7.7% 23311 ± 0% hackbench.time.system_time
1272 ± 0% -24.5% 961.37 ± 2% -10.4% 1140 ± 0% hackbench.time.user_time
7.64e+08 ± 1% +131.8% 1.771e+09 ± 2% -30.1% 5.339e+08 ± 2% hackbench.time.voluntary_context_switches
4051 ± 0% -39.9% 2434 ± 3% +57.8% 6393 ± 0% uptime.idle
4337715 ± 1% +7.3% 4654464 ± 2% -23.3% 3325346 ± 5% softirqs.RCU
2462880 ± 0% -35.6% 1585869 ± 5% +58.1% 3893988 ± 0% softirqs.SCHED
1766752 ± 1% +122.6% 3932589 ± 1% -25.6% 1313619 ± 1% vmstat.system.cs
249718 ± 2% +307.4% 1017398 ± 3% -40.4% 148723 ± 5% vmstat.system.in
1.76e+08 ± 3% +236.0% 5.914e+08 ± 2% -49.6% 88783232 ± 5% time.involuntary_context_switches
208664 ± 2% +26.0% 262929 ± 3% +15.7% 241377 ± 0% time.minor_page_faults
1272 ± 0% -24.5% 961.37 ± 2% -10.4% 1140 ± 0% time.user_time
7.64e+08 ± 1% +131.8% 1.771e+09 ± 2% -30.1% 5.339e+08 ± 2% time.voluntary_context_switches
177383 ± 0% +2.0% 180939 ± 0% -51.3% 86390 ± 1% meminfo.Active
102033 ± 0% -0.1% 101893 ± 1% -85.6% 14740 ± 0% meminfo.Active(file)
392558 ± 0% +0.0% 392612 ± 0% +22.6% 481411 ± 0% meminfo.Inactive
382911 ± 0% +0.0% 382923 ± 0% +23.2% 471792 ± 0% meminfo.Inactive(file)
143370 ± 0% -12.0% 126124 ± 1% -1.5% 141210 ± 0% meminfo.SUnreclaim
1136461 ± 3% +16.6% 1324662 ± 5% +15.9% 1316829 ± 1% numa-numastat.node0.local_node
1140216 ± 3% +16.2% 1324689 ± 5% +15.5% 1316830 ± 1% numa-numastat.node0.numa_hit
3755 ± 68% -99.3% 27.25 ± 94% -100.0% 1.25 ± 34% numa-numastat.node0.other_node
1098889 ± 4% +20.1% 1320211 ± 6% +16.4% 1278783 ± 1% numa-numastat.node1.local_node
1101996 ± 4% +20.5% 1327590 ± 6% +16.0% 1278783 ± 1% numa-numastat.node1.numa_hit
3106 ± 99% +137.5% 7379 ± 17% -100.0% 0.00 ± -1% numa-numastat.node1.other_node
7.18 ± 0% -50.2% 3.57 ± 43% +76.1% 12.64 ± 1% perf-profile.cycles-pp.call_cpuidle
8.09 ± 0% -44.7% 4.47 ± 38% +72.4% 13.95 ± 1% perf-profile.cycles-pp.cpu_startup_entry
7.17 ± 0% -50.3% 3.56 ± 43% +76.2% 12.63 ± 1% perf-profile.cycles-pp.cpuidle_enter
7.14 ± 0% -50.3% 3.55 ± 43% +76.1% 12.58 ± 1% perf-profile.cycles-pp.cpuidle_enter_state
7.11 ± 0% -50.6% 3.52 ± 43% +76.3% 12.54 ± 1% perf-profile.cycles-pp.intel_idle
8.00 ± 0% -44.5% 4.44 ± 38% +72.1% 13.77 ± 1% perf-profile.cycles-pp.start_secondary
92.32 ± 0% +5.4% 97.32 ± 0% -7.7% 85.26 ± 0% turbostat.%Busy
2763 ± 0% +5.4% 2912 ± 0% -7.7% 2551 ± 0% turbostat.Avg_MHz
7.48 ± 0% -66.5% 2.50 ± 7% +94.5% 14.54 ± 0% turbostat.CPU%c1
0.20 ± 2% -6.4% 0.18 ± 2% +2.6% 0.20 ± 3% turbostat.CPU%c6
180.03 ± 0% -1.3% 177.62 ± 0% -2.4% 175.63 ± 0% turbostat.CorWatt
209.86 ± 0% -0.8% 208.08 ± 0% -2.0% 205.64 ± 0% turbostat.PkgWatt
5.83 ± 0% +38.9% 8.10 ± 3% +12.7% 6.57 ± 1% turbostat.RAMWatt
1.658e+09 ± 0% -59.1% 6.784e+08 ± 7% +89.3% 3.138e+09 ± 0% cpuidle.C1-IVT.time
1.066e+08 ± 0% -40.3% 63661563 ± 6% +44.3% 1.539e+08 ± 0% cpuidle.C1-IVT.usage
26348635 ± 0% -86.8% 3471048 ± 15% +50.0% 39513523 ± 0% cpuidle.C1E-IVT.time
291620 ± 0% -85.1% 43352 ± 15% +28.8% 375730 ± 1% cpuidle.C1E-IVT.usage
54158643 ± 1% -88.5% 6254009 ± 14% +78.4% 96596486 ± 1% cpuidle.C3-IVT.time
482437 ± 1% -87.0% 62620 ± 16% +45.6% 702258 ± 1% cpuidle.C3-IVT.usage
5.028e+08 ± 0% -75.8% 1.219e+08 ± 8% +85.5% 9.327e+08 ± 1% cpuidle.C6-IVT.time
3805026 ± 0% -85.5% 552326 ± 16% +49.4% 5684182 ± 1% cpuidle.C6-IVT.usage
2766 ± 4% -51.4% 1344 ± 6% +10.0% 3042 ± 7% cpuidle.POLL.usage
49725 ± 4% +2.1% 50775 ± 3% -85.2% 7360 ± 0% numa-meminfo.node0.Active(file)
2228 ± 92% +137.1% 5285 ± 15% +118.7% 4874 ± 19% numa-meminfo.node0.AnonHugePages
197699 ± 2% +1.6% 200772 ± 0% +23.9% 245042 ± 0% numa-meminfo.node0.Inactive
192790 ± 1% -0.6% 191611 ± 0% +22.3% 235849 ± 0% numa-meminfo.node0.Inactive(file)
73589 ± 4% -12.5% 64393 ± 2% -1.3% 72664 ± 2% numa-meminfo.node0.SUnreclaim
27438 ± 83% +102.6% 55585 ± 6% +83.0% 50223 ± 0% numa-meminfo.node0.Shmem
101051 ± 3% -10.9% 90044 ± 2% -1.2% 99863 ± 2% numa-meminfo.node0.Slab
89204 ± 25% -25.3% 66594 ± 4% -77.6% 19954 ± 4% numa-meminfo.node1.Active
52306 ± 3% -2.3% 51117 ± 4% -85.9% 7380 ± 0% numa-meminfo.node1.Active(file)
194864 ± 2% -1.6% 191824 ± 1% +21.3% 236372 ± 0% numa-meminfo.node1.Inactive
4742 ± 86% -89.2% 511.75 ± 41% -90.9% 430.00 ± 60% numa-meminfo.node1.Inactive(anon)
190121 ± 1% +0.6% 191311 ± 1% +24.1% 235942 ± 0% numa-meminfo.node1.Inactive(file)
69844 ± 4% -11.8% 61579 ± 3% -1.9% 68521 ± 3% numa-meminfo.node1.SUnreclaim
12430 ± 4% +2.1% 12693 ± 3% -85.2% 1839 ± 0% numa-vmstat.node0.nr_active_file
48197 ± 1% -0.6% 47902 ± 0% +22.3% 58962 ± 0% numa-vmstat.node0.nr_inactive_file
6857 ± 83% +102.8% 13905 ± 6% +83.1% 12559 ± 0% numa-vmstat.node0.nr_shmem
18395 ± 4% -12.4% 16121 ± 2% -1.1% 18187 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
675569 ± 3% +12.7% 761135 ± 4% +18.8% 802726 ± 4% numa-vmstat.node0.numa_local
71537 ± 5% -7.9% 65920 ± 2% -100.0% 0.25 ±173% numa-vmstat.node0.numa_other
13076 ± 3% -2.3% 12778 ± 4% -85.9% 1844 ± 0% numa-vmstat.node1.nr_active_file
1187 ± 86% -89.3% 127.50 ± 41% -91.0% 107.25 ± 60% numa-vmstat.node1.nr_inactive_anon
47530 ± 1% +0.6% 47827 ± 1% +24.1% 58985 ± 0% numa-vmstat.node1.nr_inactive_file
17456 ± 4% -11.7% 15405 ± 3% -1.9% 17127 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
695848 ± 3% +14.9% 799683 ± 5% +4.7% 728368 ± 3% numa-vmstat.node1.numa_hit
677405 ± 4% +14.5% 775903 ± 6% +7.5% 728368 ± 3% numa-vmstat.node1.numa_local
18442 ± 19% +28.9% 23779 ± 5% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_other
25508 ± 0% -0.1% 25473 ± 1% -85.6% 3684 ± 0% proc-vmstat.nr_active_file
95727 ± 0% +0.0% 95730 ± 0% +23.2% 117947 ± 0% proc-vmstat.nr_inactive_file
35841 ± 0% -12.0% 31543 ± 0% -1.5% 35298 ± 0% proc-vmstat.nr_slab_unreclaimable
154090 ± 2% +43.1% 220509 ± 3% +23.5% 190284 ± 0% proc-vmstat.numa_hint_faults
129240 ± 2% +47.4% 190543 ± 3% +15.1% 148733 ± 1% proc-vmstat.numa_hint_faults_local
2238386 ± 1% +18.4% 2649737 ± 2% +15.8% 2591197 ± 0% proc-vmstat.numa_hit
2232163 ± 1% +18.4% 2643105 ± 2% +16.1% 2591195 ± 0% proc-vmstat.numa_local
6223 ± 0% +6.6% 6632 ± 10% -100.0% 1.25 ± 34% proc-vmstat.numa_other
22315 ± 1% -21.0% 17625 ± 5% -0.4% 22234 ± 0% proc-vmstat.numa_pages_migrated
154533 ± 2% +45.6% 225071 ± 3% +25.7% 194235 ± 0% proc-vmstat.numa_pte_updates
14224 ± 0% +5.5% 15006 ± 3% -17.8% 11689 ± 0% proc-vmstat.pgactivate
382980 ± 2% +33.2% 510157 ± 4% +22.0% 467358 ± 0% proc-vmstat.pgalloc_dma32
7311738 ± 2% +37.2% 10029060 ± 2% +28.2% 9374740 ± 0% proc-vmstat.pgalloc_normal
7672040 ± 2% +37.1% 10519738 ± 2% +28.0% 9823026 ± 0% proc-vmstat.pgfree
22315 ± 1% -21.0% 17625 ± 5% -0.4% 22234 ± 0% proc-vmstat.pgmigrate_success
720.75 ± 3% -11.3% 639.50 ± 1% -29.2% 510.00 ± 0% slabinfo.RAW.active_objs
720.75 ± 3% -11.3% 639.50 ± 1% -29.2% 510.00 ± 0% slabinfo.RAW.num_objs
5487 ± 6% -12.6% 4797 ± 4% -100.0% 0.00 ± -1% slabinfo.UNIX.active_objs
164.50 ± 5% -12.3% 144.25 ± 4% -100.0% 0.00 ± -1% slabinfo.UNIX.active_slabs
5609 ± 5% -12.2% 4926 ± 4% -100.0% 0.00 ± -1% slabinfo.UNIX.num_objs
164.50 ± 5% -12.3% 144.25 ± 4% -100.0% 0.00 ± -1% slabinfo.UNIX.num_slabs
4362 ± 4% +14.6% 4998 ± 2% -3.2% 4223 ± 4% slabinfo.cred_jar.active_objs
4362 ± 4% +14.6% 4998 ± 2% -3.2% 4223 ± 4% slabinfo.cred_jar.num_objs
2904 ± 4% -2.7% 2825 ± 1% +56.5% 4545 ± 2% slabinfo.kmalloc-1024.active_objs
2935 ± 2% -0.5% 2920 ± 1% +57.8% 4633 ± 2% slabinfo.kmalloc-1024.num_objs
42525 ± 0% -41.6% 24824 ± 3% +7.3% 45621 ± 0% slabinfo.kmalloc-256.active_objs
845.50 ± 0% -42.9% 482.50 ± 3% +3.0% 870.50 ± 0% slabinfo.kmalloc-256.active_slabs
54124 ± 0% -42.9% 30920 ± 3% +3.0% 55755 ± 0% slabinfo.kmalloc-256.num_objs
845.50 ± 0% -42.9% 482.50 ± 3% +3.0% 870.50 ± 0% slabinfo.kmalloc-256.num_slabs
47204 ± 0% -37.9% 29335 ± 2% +6.6% 50334 ± 0% slabinfo.kmalloc-512.active_objs
915.25 ± 0% -39.8% 551.00 ± 3% +2.8% 940.50 ± 0% slabinfo.kmalloc-512.active_slabs
58599 ± 0% -39.8% 35300 ± 3% +2.8% 60224 ± 0% slabinfo.kmalloc-512.num_objs
915.25 ± 0% -39.8% 551.00 ± 3% +2.8% 940.50 ± 0% slabinfo.kmalloc-512.num_slabs
12443 ± 2% -20.1% 9944 ± 3% -6.5% 11639 ± 1% slabinfo.pid.active_objs
12443 ± 2% -20.1% 9944 ± 3% -6.5% 11639 ± 1% slabinfo.pid.num_objs
440.00 ± 5% -32.8% 295.75 ± 4% -11.7% 388.50 ± 7% slabinfo.taskstats.active_objs
440.00 ± 5% -32.8% 295.75 ± 4% -11.7% 388.50 ± 7% slabinfo.taskstats.num_objs
188235 ± 74% +62.9% 306699 ± 27% -98.6% 2627 ± 40% sched_debug.cfs_rq:/.MIN_vruntime.avg
7146629 ± 80% +27.7% 9122933 ± 36% -98.6% 98261 ± 36% sched_debug.cfs_rq:/.MIN_vruntime.max
1117852 ± 77% +44.7% 1617052 ± 31% -98.6% 15548 ± 37% sched_debug.cfs_rq:/.MIN_vruntime.stddev
61.52 ±116% -70.6% 18.11 ± 6% +1.2e+06% 718736 ± 1% sched_debug.cfs_rq:/.load.avg
2144 ±161% -96.3% 79.41 ± 48% +49309.2% 1059411 ± 3% sched_debug.cfs_rq:/.load.max
312.45 ±157% -94.8% 16.29 ± 33% +1.1e+05% 333106 ± 5% sched_debug.cfs_rq:/.load.stddev
20.46 ± 4% +9.0% 22.31 ± 6% +3004.0% 635.15 ± 1% sched_debug.cfs_rq:/.load_avg.avg
81.57 ± 32% +14.2% 93.18 ± 26% +1035.5% 926.18 ± 3% sched_debug.cfs_rq:/.load_avg.max
8.14 ± 5% -2.8% 7.91 ± 3% +2585.8% 218.52 ± 13% sched_debug.cfs_rq:/.load_avg.min
13.90 ± 29% +16.9% 16.25 ± 22% +1089.3% 165.34 ± 5% sched_debug.cfs_rq:/.load_avg.stddev
188235 ± 74% +62.9% 306699 ± 27% -98.6% 2627 ± 40% sched_debug.cfs_rq:/.max_vruntime.avg
7146629 ± 80% +27.7% 9122933 ± 36% -98.6% 98261 ± 36% sched_debug.cfs_rq:/.max_vruntime.max
1117852 ± 77% +44.7% 1617052 ± 31% -98.6% 15548 ± 37% sched_debug.cfs_rq:/.max_vruntime.stddev
29491781 ± 0% -4.8% 28074842 ± 1% -99.0% 295426 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
31241540 ± 0% -5.8% 29418054 ± 0% -99.0% 320734 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
27849652 ± 0% -3.7% 26821072 ± 2% -99.0% 275550 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
861989 ± 3% -20.2% 687639 ± 22% -98.3% 14586 ± 2% sched_debug.cfs_rq:/.min_vruntime.stddev
0.27 ± 5% -56.3% 0.12 ± 30% +27.5% 0.34 ± 6% sched_debug.cfs_rq:/.nr_running.stddev
16.51 ± 1% +9.5% 18.08 ± 3% +3343.1% 568.61 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.avg
34.80 ± 13% +15.0% 40.02 ± 19% +2514.0% 909.57 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
0.05 ±100% +7950.0% 3.66 ± 48% +3250.0% 1.52 ± 89% sched_debug.cfs_rq:/.runnable_load_avg.min
7.18 ± 9% -0.1% 7.18 ± 13% +3571.2% 263.68 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-740916 ±-28% -158.5% 433310 ±120% -96.8% -23579 ± -5% sched_debug.cfs_rq:/.spread0.avg
1009940 ± 19% +75.8% 1775442 ± 30% -99.8% 1736 ±164% sched_debug.cfs_rq:/.spread0.max
-2384171 ± -7% -65.7% -818684 ±-76% -98.2% -43456 ± -4% sched_debug.cfs_rq:/.spread0.min
862765 ± 3% -20.4% 686825 ± 22% -98.3% 14591 ± 2% sched_debug.cfs_rq:/.spread0.stddev
749.14 ± 1% +13.0% 846.34 ± 1% -41.1% 441.05 ± 5% sched_debug.cfs_rq:/.util_avg.min
51.66 ± 4% -36.3% 32.92 ± 5% +150.6% 129.46 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
161202 ± 7% -41.7% 93997 ± 4% +147.7% 399342 ± 1% sched_debug.cpu.avg_idle.avg
595158 ± 6% -51.2% 290491 ± 22% +37.8% 820120 ± 0% sched_debug.cpu.avg_idle.max
7658 ± 51% +9.2% 8366 ± 26% +114.4% 16423 ± 31% sched_debug.cpu.avg_idle.min
132760 ± 8% -58.8% 54718 ± 19% +97.8% 262608 ± 0% sched_debug.cpu.avg_idle.stddev
11.40 ± 11% +111.0% 24.05 ± 16% -58.1% 4.78 ± 3% sched_debug.cpu.clock.stddev
11.40 ± 11% +111.0% 24.05 ± 16% -58.1% 4.78 ± 3% sched_debug.cpu.clock_task.stddev
16.59 ± 1% +7.7% 17.86 ± 2% +3099.8% 530.73 ± 2% sched_debug.cpu.cpu_load[0].avg
32.34 ± 2% +23.9% 40.07 ± 19% +2715.0% 910.41 ± 0% sched_debug.cpu.cpu_load[0].max
0.34 ±103% +520.0% 2.11 ± 67% +140.0% 0.82 ±110% sched_debug.cpu.cpu_load[0].min
6.87 ± 3% +8.0% 7.42 ± 13% +4228.9% 297.50 ± 3% sched_debug.cpu.cpu_load[0].stddev
16.56 ± 0% +8.1% 17.91 ± 2% +3703.9% 630.04 ± 1% sched_debug.cpu.cpu_load[1].avg
32.18 ± 2% +22.7% 39.50 ± 17% +2728.5% 910.25 ± 0% sched_debug.cpu.cpu_load[1].max
3.32 ± 8% +84.9% 6.14 ± 12% +5364.4% 181.32 ± 9% sched_debug.cpu.cpu_load[1].min
6.14 ± 5% +12.5% 6.91 ± 13% +2708.6% 172.56 ± 5% sched_debug.cpu.cpu_load[1].stddev
16.75 ± 1% +7.6% 18.02 ± 2% +3646.9% 627.69 ± 1% sched_debug.cpu.cpu_load[2].avg
33.25 ± 7% +16.5% 38.75 ± 14% +2634.1% 909.09 ± 0% sched_debug.cpu.cpu_load[2].max
5.39 ± 7% +36.3% 7.34 ± 4% +3547.3% 196.45 ± 11% sched_debug.cpu.cpu_load[2].min
5.95 ± 9% +11.8% 6.65 ± 11% +2752.1% 169.73 ± 5% sched_debug.cpu.cpu_load[2].stddev
17.17 ± 1% +6.1% 18.22 ± 2% +3552.1% 626.96 ± 1% sched_debug.cpu.cpu_load[3].avg
33.20 ± 7% +14.6% 38.05 ± 9% +2631.3% 906.93 ± 0% sched_debug.cpu.cpu_load[3].max
6.93 ± 7% +10.5% 7.66 ± 1% +2766.9% 198.73 ± 11% sched_debug.cpu.cpu_load[3].min
5.70 ± 9% +13.9% 6.49 ± 8% +2825.6% 166.73 ± 5% sched_debug.cpu.cpu_load[3].stddev
17.49 ± 0% +4.9% 18.36 ± 2% +3482.1% 626.64 ± 1% sched_debug.cpu.cpu_load[4].avg
33.18 ± 3% +14.0% 37.82 ± 5% +2615.8% 901.16 ± 0% sched_debug.cpu.cpu_load[4].max
7.66 ± 8% +0.9% 7.73 ± 1% +2568.8% 204.41 ± 11% sched_debug.cpu.cpu_load[4].min
5.56 ± 6% +16.2% 6.45 ± 6% +2814.9% 161.96 ± 6% sched_debug.cpu.cpu_load[4].stddev
16741 ± 0% -15.4% 14166 ± 2% -13.0% 14564 ± 2% sched_debug.cpu.curr->pid.avg
19196 ± 0% -18.3% 15690 ± 1% -4.9% 18255 ± 0% sched_debug.cpu.curr->pid.max
5174 ± 5% -55.4% 2305 ± 14% +19.3% 6173 ± 6% sched_debug.cpu.curr->pid.stddev
18.60 ± 5% -2.7% 18.10 ± 6% +3.9e+06% 717646 ± 2% sched_debug.cpu.load.avg
81.23 ± 48% -2.4% 79.30 ± 47% +1.3e+06% 1059340 ± 3% sched_debug.cpu.load.max
18.01 ± 28% -9.4% 16.32 ± 33% +1.9e+06% 333436 ± 5% sched_debug.cpu.load.stddev
0.00 ± 2% +29.8% 0.00 ± 33% +39.0% 0.00 ± 15% sched_debug.cpu.next_balance.stddev
1410 ± 1% -14.2% 1210 ± 6% +34.5% 1896 ± 1% sched_debug.cpu.nr_load_updates.stddev
9.95 ± 3% -14.5% 8.51 ± 5% -1.2% 9.83 ± 2% sched_debug.cpu.nr_running.avg
29.07 ± 2% -15.0% 24.70 ± 4% +37.5% 39.98 ± 1% sched_debug.cpu.nr_running.max
0.05 ±100% +850.0% 0.43 ± 37% -100.0% 0.00 ± -1% sched_debug.cpu.nr_running.min
7.64 ± 3% -23.0% 5.88 ± 2% +48.6% 11.36 ± 2% sched_debug.cpu.nr_running.stddev
10979930 ± 1% +123.3% 24518490 ± 2% -26.3% 8091669 ± 1% sched_debug.cpu.nr_switches.avg
12350130 ± 1% +117.5% 26856375 ± 2% -17.0% 10249081 ± 2% sched_debug.cpu.nr_switches.max
9594835 ± 2% +132.6% 22314436 ± 2% -31.0% 6620975 ± 2% sched_debug.cpu.nr_switches.min
769296 ± 1% +56.8% 1206190 ± 3% +54.6% 1189172 ± 1% sched_debug.cpu.nr_switches.stddev
8.30 ± 18% +32.9% 11.02 ± 15% +113.7% 17.73 ± 26% sched_debug.cpu.nr_uninterruptible.max
4.87 ± 15% +14.3% 5.57 ± 6% +97.2% 9.61 ± 29% sched_debug.cpu.nr_uninterruptible.stddev
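
For clarity, the %change columns above are computed relative to the first
column (commit c5114626f33b62fa). A minimal Python sketch, using only the raw
hackbench.throughput averages from the first table row, reproduces the
headline -32.9% and -9.8% deltas quoted at the top of this mail:

# Illustrative only: recompute the headline hackbench.throughput %change
# values from the per-commit averages in the first row of the table above.
base     = 196590   # c5114626f33b62fa
bisected = 131963   # 53d3bc773eaa7ab1cf63585e76
rc1      = 177231   # v4.7-rc1

def pct_change(new, old):
    # Percent change of `new` relative to the base value `old`.
    return (new - old) / old * 100.0

print("53d3bc773e vs. base: %+.1f%%" % pct_change(bisected, base))  # -32.9%
print("v4.7-rc1   vs. base: %+.1f%%" % pct_change(rc1, base))       # -9.8%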