Re: [LKP] [lkp] [sched/fair] 53d3bc773e: hackbench.throughput -32.9% regression
From: Huang, Ying
Date: Wed Jun 01 2016 - 01:00:22 EST
Hi, Peter,
Peter Zijlstra <peterz@xxxxxxxxxxxxx> writes:
> On Tue, May 31, 2016 at 04:34:36PM +0800, Huang, Ying wrote:
>> Hi, Ingo,
>>
>> Part of the regression has been recovered in v4.7-rc1, from -32.9% to
>> -9.8%, but there is still some regression. Is it possible to fully
>> restore it?
>
> after much searching on how you guys run hackbench... I figured
> something like:
>
> perf bench sched messaging -g 20 --thread -l 60000
There is a reproduce file attached to the original report email; its
contents are something like the following:
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-05-15 08:57:02 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-05-15 08:57:03 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:57:50 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:58:33 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:59:15 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 08:59:58 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:00:43 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:01:22 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:01:57 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:02:39 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:03:22 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:04:10 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:04:53 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:05:39 /usr/bin/hackbench -g 24 --threads -l 60000
2016-05-15 09:06:24 /usr/bin/hackbench -g 24 --threads -l 60000
Hope that helps you reproduce it.
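In case it is easier to automate, the steps in that reproduce file boil
down to something like the sketch below (same governor writes and
hackbench options as above; the actual lkp-tests wrapper may differ):

#!/bin/sh
# Sketch of the reproduce steps: switch every CPU to the performance
# governor, then run hackbench back to back, 14 times as in the log above.
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
done
for i in $(seq 1 14); do
        /usr/bin/hackbench -g 24 --threads -l 60000
done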
> on my IVB-EP (2*10*2) is similar to your IVT thing.
>
> And running something like:
>
> for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i ; done
> perf stat --null --repeat 10 -- perf bench sched messaging -g 20 --thread -l 60000 | grep "seconds time elapsed"
>
> gets me:
>
> v4.6:
>
> 36.786914089 seconds time elapsed ( +- 0.49% )
> 37.054017355 seconds time elapsed ( +- 1.05% )
>
>
> origin/master (v4.7-rc1-ish):
>
> 34.757435264 seconds time elapsed ( +- 3.34% )
> 35.396252515 seconds time elapsed ( +- 3.38% )
>
>
> Which doesn't show a regression between v4.6 and HEAD; in fact it shows
> an improvement.
Yes. For the hackbench test, linus/master (v4.7-rc1+) is better than v4.6,
but it is still worse than v4.6-rc7. Details are as below.
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/mode/ipc:
ivb42/hackbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/50%/threads/socket
commit:
v4.6-rc7
v4.6
367d3fd50566a313946fa9c5b2116a81bf3807e4
v4.6-rc7 v4.6 367d3fd50566a313946fa9c5b2
---------------- -------------------------- --------------------------
%stddev %change %stddev %change %stddev
\ | \ | \
198307 ± 0% -33.4% 132165 ± 3% -11.8% 174857 ± 0% hackbench.throughput
625.91 ± 0% -2.0% 613.12 ± 1% -2.1% 612.85 ± 0% hackbench.time.elapsed_time
625.91 ± 0% -2.0% 613.12 ± 1% -2.1% 612.85 ± 0% hackbench.time.elapsed_time.max
1.611e+08 ± 0% +254.7% 5.712e+08 ± 4% -25.3% 1.203e+08 ± 5% hackbench.time.involuntary_context_switches
212287 ± 2% +22.3% 259622 ± 4% +33.0% 282261 ± 1% hackbench.time.minor_page_faults
4391 ± 0% +5.7% 4643 ± 0% -6.9% 4090 ± 0% hackbench.time.percent_of_cpu_this_job_got
26154 ± 0% +5.2% 27509 ± 1% -8.5% 23935 ± 0% hackbench.time.system_time
1336 ± 0% -28.1% 961.07 ± 2% -14.8% 1138 ± 0% hackbench.time.user_time
7.442e+08 ± 0% +129.6% 1.709e+09 ± 4% -17.5% 6.139e+08 ± 2% hackbench.time.voluntary_context_switches
4157 ± 1% -39.0% 2536 ± 15% +44.6% 6011 ± 2% uptime.idle
1656569 ± 0% +131.8% 3840033 ± 3% -10.2% 1486840 ± 2% vmstat.system.cs
225682 ± 0% +335.2% 982245 ± 5% -4.2% 216300 ± 7% vmstat.system.in
4416560 ± 3% +7.3% 4737257 ± 2% -18.1% 3617836 ± 1% softirqs.RCU
2591680 ± 0% -37.9% 1608431 ± 7% +47.9% 3833673 ± 0% softirqs.SCHED
13948275 ± 0% +3.3% 14406201 ± 1% -8.9% 12703887 ± 0% softirqs.TIMER
1.611e+08 ± 0% +254.7% 5.712e+08 ± 4% -25.3% 1.203e+08 ± 5% time.involuntary_context_switches
212287 ± 2% +22.3% 259622 ± 4% +33.0% 282261 ± 1% time.minor_page_faults
1336 ± 0% -28.1% 961.07 ± 2% -14.8% 1138 ± 0% time.user_time
7.442e+08 ± 0% +129.6% 1.709e+09 ± 4% -17.5% 6.139e+08 ± 2% time.voluntary_context_switches
176970 ± 1% +2.4% 181276 ± 0% -51.5% 85865 ± 0% meminfo.Active
101149 ± 2% +0.4% 101589 ± 1% -85.4% 14807 ± 0% meminfo.Active(file)
390916 ± 0% +1.1% 395022 ± 0% +23.2% 481664 ± 0% meminfo.Inactive
381267 ± 0% +1.1% 385296 ± 0% +23.8% 472035 ± 0% meminfo.Inactive(file)
143716 ± 0% -12.4% 125923 ± 1% -2.4% 140230 ± 0% meminfo.SUnreclaim
194906 ± 0% -8.9% 177650 ± 1% -1.8% 191478 ± 0% meminfo.Slab
1162151 ± 6% +11.4% 1294775 ± 2% +17.5% 1365360 ± 1% numa-numastat.node0.local_node
1163400 ± 6% +11.5% 1297646 ± 2% +17.4% 1365361 ± 1% numa-numastat.node0.numa_hit
1249 ±197% +129.8% 2871 ± 95% -99.9% 0.67 ± 70% numa-numastat.node0.other_node
1084104 ± 6% +15.1% 1247352 ± 4% +22.0% 1323149 ± 1% numa-numastat.node1.local_node
1089973 ± 6% +14.9% 1252683 ± 4% +21.4% 1323149 ± 1% numa-numastat.node1.numa_hit
5868 ± 40% -9.2% 5330 ± 70% -100.0% 0.33 ±141% numa-numastat.node1.other_node
92.11 ± 0% +5.5% 97.16 ± 0% -6.3% 86.33 ± 0% turbostat.%Busy
2756 ± 0% +5.5% 2907 ± 0% -6.3% 2584 ± 0% turbostat.Avg_MHz
7.70 ± 0% -65.6% 2.64 ± 12% +74.9% 13.46 ± 2% turbostat.CPU%c1
180.27 ± 0% -1.6% 177.34 ± 0% -2.5% 175.80 ± 0% turbostat.CorWatt
210.07 ± 0% -1.1% 207.71 ± 0% -1.9% 206.01 ± 0% turbostat.PkgWatt
5.81 ± 0% +35.8% 7.88 ± 3% +24.2% 7.21 ± 2% turbostat.RAMWatt
102504 ± 20% -5.6% 96726 ± 25% -65.7% 35129 ± 52% numa-meminfo.node0.Active
50026 ± 2% +2.3% 51197 ± 4% -85.2% 7408 ± 0% numa-meminfo.node0.Active(file)
198553 ± 2% +0.4% 199265 ± 3% +22.0% 242211 ± 1% numa-meminfo.node0.Inactive
191148 ± 1% +1.7% 194350 ± 3% +23.5% 235978 ± 0% numa-meminfo.node0.Inactive(file)
74572 ± 8% -11.8% 65807 ± 5% -4.4% 71257 ± 3% numa-meminfo.node0.SUnreclaim
51121 ± 5% -1.4% 50391 ± 4% -85.5% 7398 ± 0% numa-meminfo.node1.Active(file)
192353 ± 1% +1.8% 195730 ± 2% +24.5% 239430 ± 1% numa-meminfo.node1.Inactive
190119 ± 0% +0.4% 190946 ± 1% +24.2% 236055 ± 0% numa-meminfo.node1.Inactive(file)
472112 ± 5% +3.0% 486190 ± 5% +8.2% 510902 ± 4% numa-meminfo.node1.MemUsed
12506 ± 2% +2.3% 12799 ± 4% -85.2% 1852 ± 0% numa-vmstat.node0.nr_active_file
47786 ± 1% +1.7% 48587 ± 3% +23.5% 58994 ± 0% numa-vmstat.node0.nr_inactive_file
18626 ± 8% -11.7% 16446 ± 5% -4.4% 17801 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
66037 ± 3% +3.1% 68095 ± 4% -100.0% 0.00 ± 0% numa-vmstat.node0.numa_other
12780 ± 5% -1.4% 12597 ± 4% -85.5% 1849 ± 0% numa-vmstat.node1.nr_active_file
47529 ± 0% +0.4% 47735 ± 1% +24.2% 59013 ± 0% numa-vmstat.node1.nr_inactive_file
698206 ± 5% +11.3% 777438 ± 4% +17.6% 820805 ± 2% numa-vmstat.node1.numa_hit
674672 ± 6% +12.0% 755944 ± 4% +21.7% 820805 ± 2% numa-vmstat.node1.numa_local
23532 ± 10% -8.7% 21493 ± 15% -100.0% 0.00 ± 0% numa-vmstat.node1.numa_other
1.766e+09 ± 0% -60.1% 7.057e+08 ± 11% +70.1% 3.004e+09 ± 1% cpuidle.C1-IVT.time
1.125e+08 ± 0% -41.9% 65415380 ± 10% +38.6% 1.559e+08 ± 0% cpuidle.C1-IVT.usage
28400387 ± 1% -86.0% 3980259 ± 24% +21.9% 34611888 ± 3% cpuidle.C1E-IVT.time
308989 ± 0% -84.5% 47825 ± 23% +10.1% 340115 ± 3% cpuidle.C1E-IVT.usage
58891432 ± 0% -88.2% 6936400 ± 20% +36.2% 80209704 ± 4% cpuidle.C3-IVT.time
521047 ± 0% -86.5% 70085 ± 22% +16.6% 607626 ± 3% cpuidle.C3-IVT.usage
5.375e+08 ± 0% -75.8% 1.298e+08 ± 11% +55.6% 8.366e+08 ± 2% cpuidle.C6-IVT.time
4062211 ± 0% -85.1% 603908 ± 22% +28.0% 5200129 ± 2% cpuidle.C6-IVT.usage
15694 ± 6% +386.2% 76300 ±145% +774.3% 137212 ± 62% cpuidle.POLL.time
2751 ± 3% -52.5% 1308 ± 18% +15.4% 3176 ± 2% cpuidle.POLL.usage
25287 ± 2% +0.4% 25397 ± 1% -85.4% 3701 ± 0% proc-vmstat.nr_active_file
95316 ± 0% +1.1% 96323 ± 0% +23.8% 118008 ± 0% proc-vmstat.nr_inactive_file
35930 ± 0% -12.3% 31511 ± 1% -2.5% 35048 ± 0% proc-vmstat.nr_slab_unreclaimable
154964 ± 3% +40.6% 217915 ± 5% +48.6% 230354 ± 2% proc-vmstat.numa_hint_faults
128683 ± 4% +46.4% 188443 ± 5% +45.5% 187179 ± 2% proc-vmstat.numa_hint_faults_local
2247802 ± 0% +13.2% 2544572 ± 2% +19.5% 2685990 ± 0% proc-vmstat.numa_hit
2241597 ± 0% +13.2% 2537511 ± 2% +19.8% 2685989 ± 0% proc-vmstat.numa_local
6205 ± 0% +13.8% 7060 ± 18% -100.0% 1.00 ± 0% proc-vmstat.numa_other
23151 ± 1% -25.8% 17189 ± 4% -1.7% 22762 ± 0% proc-vmstat.numa_pages_migrated
155763 ± 3% +43.4% 223408 ± 5% +49.7% 233247 ± 2% proc-vmstat.numa_pte_updates
14010 ± 1% +16.3% 16287 ± 7% -17.1% 11610 ± 0% proc-vmstat.pgactivate
373910 ± 2% +28.4% 479928 ± 4% +30.1% 486506 ± 1% proc-vmstat.pgalloc_dma32
7157922 ± 1% +30.9% 9370533 ± 2% +38.0% 9878095 ± 0% proc-vmstat.pgalloc_normal
7509133 ± 1% +30.9% 9827974 ± 2% +37.8% 10345598 ± 0% proc-vmstat.pgfree
23151 ± 1% -25.8% 17189 ± 4% -1.7% 22762 ± 0% proc-vmstat.pgmigrate_success
737.40 ± 4% -10.3% 661.25 ± 3% -30.8% 510.00 ± 0% slabinfo.RAW.active_objs
737.40 ± 4% -10.3% 661.25 ± 3% -30.8% 510.00 ± 0% slabinfo.RAW.num_objs
5762 ± 6% -19.2% 4653 ± 3% -100.0% 0.00 ± -1% slabinfo.UNIX.active_objs
172.60 ± 6% -18.9% 140.00 ± 3% -100.0% 0.00 ± -1% slabinfo.UNIX.active_slabs
5892 ± 6% -19.0% 4775 ± 3% -100.0% 0.00 ± -1% slabinfo.UNIX.num_objs
172.60 ± 6% -18.9% 140.00 ± 3% -100.0% 0.00 ± -1% slabinfo.UNIX.num_slabs
37256 ± 3% -8.7% 34010 ± 3% +1.6% 37863 ± 0% slabinfo.anon_vma_chain.active_objs
37401 ± 3% -8.8% 34094 ± 3% +1.5% 37948 ± 0% slabinfo.anon_vma_chain.num_objs
4509 ± 1% +13.8% 5130 ± 9% +8.3% 4885 ± 15% slabinfo.cred_jar.active_objs
4509 ± 1% +13.8% 5130 ± 9% +8.3% 4885 ± 15% slabinfo.cred_jar.num_objs
2783 ± 2% +3.4% 2877 ± 4% +54.3% 4295 ± 0% slabinfo.kmalloc-1024.active_objs
2858 ± 1% +2.6% 2932 ± 3% +53.4% 4385 ± 0% slabinfo.kmalloc-1024.num_objs
25441 ± 1% -10.1% 22884 ± 1% -3.8% 24477 ± 2% slabinfo.kmalloc-16.active_objs
25441 ± 1% -10.1% 22884 ± 1% -3.8% 24477 ± 2% slabinfo.kmalloc-16.num_objs
43013 ± 0% -41.4% 25205 ± 5% +3.1% 44366 ± 1% slabinfo.kmalloc-256.active_objs
854.60 ± 0% -42.0% 495.25 ± 5% -1.0% 846.00 ± 0% slabinfo.kmalloc-256.active_slabs
54719 ± 0% -42.0% 31735 ± 5% -1.0% 54189 ± 0% slabinfo.kmalloc-256.num_objs
854.60 ± 0% -42.0% 495.25 ± 5% -1.0% 846.00 ± 0% slabinfo.kmalloc-256.num_slabs
47683 ± 0% -37.7% 29715 ± 4% +2.9% 49067 ± 0% slabinfo.kmalloc-512.active_objs
924.00 ± 0% -39.0% 563.75 ± 4% -0.9% 916.00 ± 0% slabinfo.kmalloc-512.active_slabs
59169 ± 0% -39.0% 36109 ± 4% -0.8% 58667 ± 0% slabinfo.kmalloc-512.num_objs
924.00 ± 0% -39.0% 563.75 ± 4% -0.9% 916.00 ± 0% slabinfo.kmalloc-512.num_slabs
8287 ± 2% +2.8% 8521 ± 4% +12.6% 9335 ± 2% slabinfo.kmalloc-96.active_objs
8351 ± 3% +2.6% 8570 ± 4% +12.7% 9409 ± 2% slabinfo.kmalloc-96.num_objs
12776 ± 1% -22.2% 9944 ± 2% -6.8% 11906 ± 1% slabinfo.pid.active_objs
12776 ± 1% -22.2% 9944 ± 2% -6.8% 11906 ± 1% slabinfo.pid.num_objs
5708 ± 2% -10.0% 5139 ± 3% -6.2% 5355 ± 0% slabinfo.sock_inode_cache.active_objs
5902 ± 2% -9.8% 5326 ± 3% -5.9% 5552 ± 0% slabinfo.sock_inode_cache.num_objs
447.40 ± 6% -35.7% 287.50 ± 6% -7.7% 413.00 ± 4% slabinfo.taskstats.active_objs
447.40 ± 6% -35.7% 287.50 ± 6% -7.7% 413.00 ± 4% slabinfo.taskstats.num_objs
304731 ± 27% -45.5% 166107 ± 76% -98.3% 5031 ± 23% sched_debug.cfs_rq:/.MIN_vruntime.avg
12211047 ± 35% -38.5% 7509311 ± 78% -99.0% 118856 ± 40% sched_debug.cfs_rq:/.MIN_vruntime.max
1877477 ± 30% -41.5% 1098508 ± 77% -98.8% 21976 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.stddev
18.91 ± 7% -3.9% 18.16 ± 8% +3.8e+06% 715502 ± 2% sched_debug.cfs_rq:/.load.avg
95.71 ± 45% -7.8% 88.20 ± 74% +1.1e+06% 1067373 ± 4% sched_debug.cfs_rq:/.load.max
19.94 ± 31% -9.6% 18.02 ± 52% +1.7e+06% 335607 ± 2% sched_debug.cfs_rq:/.load.stddev
21.16 ± 9% +12.3% 23.76 ± 9% +2890.4% 632.65 ± 3% sched_debug.cfs_rq:/.load_avg.avg
125.40 ± 49% +4.4% 130.90 ± 13% +643.9% 932.88 ± 5% sched_debug.cfs_rq:/.load_avg.max
8.29 ± 2% -3.5% 8.00 ± 6% +2852.1% 244.76 ± 6% sched_debug.cfs_rq:/.load_avg.min
20.18 ± 45% +13.1% 22.83 ± 18% +720.8% 165.65 ± 3% sched_debug.cfs_rq:/.load_avg.stddev
304731 ± 27% -45.5% 166107 ± 76% -98.3% 5031 ± 23% sched_debug.cfs_rq:/.max_vruntime.avg
12211047 ± 35% -38.5% 7509311 ± 78% -99.0% 118856 ± 40% sched_debug.cfs_rq:/.max_vruntime.max
1877477 ± 30% -41.5% 1098508 ± 77% -98.8% 21976 ± 14% sched_debug.cfs_rq:/.max_vruntime.stddev
29445770 ± 0% -4.3% 28190370 ± 2% -99.0% 299502 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
31331918 ± 0% -6.2% 29380072 ± 2% -99.0% 322082 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
27785446 ± 0% -2.5% 27098935 ± 2% -99.0% 282267 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
916182 ± 13% -35.6% 590123 ± 13% -98.6% 12421 ± 4% sched_debug.cfs_rq:/.min_vruntime.stddev
0.26 ± 6% -34.5% 0.17 ± 14% +34.0% 0.34 ± 3% sched_debug.cfs_rq:/.nr_running.stddev
16.42 ± 3% +6.9% 17.56 ± 3% +3319.3% 561.57 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.avg
38.22 ± 28% +10.9% 42.38 ± 49% +2280.8% 909.91 ± 1% sched_debug.cfs_rq:/.runnable_load_avg.max
0.05 ±133% +4879.2% 2.72 ± 46% +4177.8% 2.33 ± 32% sched_debug.cfs_rq:/.runnable_load_avg.min
7.59 ± 17% +4.0% 7.90 ± 36% +3375.3% 263.95 ± 1% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-897515 ±-52% -132.1% 288533 ±159% -97.6% -21836 ± -6% sched_debug.cfs_rq:/.spread0.avg
989517 ± 31% +49.2% 1476487 ± 23% -99.9% 748.12 ±129% sched_debug.cfs_rq:/.spread0.max
-2558887 ±-23% -68.7% -801084 ±-66% -98.5% -39072 ± -7% sched_debug.cfs_rq:/.spread0.min
916967 ± 13% -35.7% 589208 ± 13% -98.6% 12424 ± 4% sched_debug.cfs_rq:/.spread0.stddev
744.20 ± 0% +10.9% 825.23 ± 3% -38.6% 457.27 ± 3% sched_debug.cfs_rq:/.util_avg.min
58.07 ± 9% -28.4% 41.55 ± 19% +119.1% 127.19 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
157158 ± 3% -35.8% 100942 ± 9% +135.5% 370117 ± 7% sched_debug.cpu.avg_idle.avg
600573 ± 2% -42.9% 342823 ± 21% +29.4% 777397 ± 2% sched_debug.cpu.avg_idle.max
133080 ± 6% -48.5% 68563 ± 21% +87.9% 250058 ± 0% sched_debug.cpu.avg_idle.stddev
11.80 ± 22% +96.1% 23.13 ± 24% -60.4% 4.67 ± 2% sched_debug.cpu.clock.stddev
11.80 ± 22% +96.1% 23.13 ± 24% -60.4% 4.67 ± 2% sched_debug.cpu.clock_task.stddev
16.49 ± 3% +10.3% 18.19 ± 7% +2983.8% 508.41 ± 6% sched_debug.cpu.cpu_load[0].avg
38.35 ± 28% +69.4% 64.95 ± 61% +2275.1% 910.76 ± 1% sched_debug.cpu.cpu_load[0].max
7.67 ± 18% +53.8% 11.79 ± 53% +3832.5% 301.43 ± 5% sched_debug.cpu.cpu_load[0].stddev
16.39 ± 2% +9.9% 18.01 ± 5% +3723.3% 626.64 ± 3% sched_debug.cpu.cpu_load[1].avg
37.87 ± 27% +51.3% 57.30 ± 47% +2294.6% 906.91 ± 1% sched_debug.cpu.cpu_load[1].max
3.91 ± 17% +48.0% 5.78 ± 15% +4683.7% 187.00 ± 5% sched_debug.cpu.cpu_load[1].min
6.84 ± 20% +45.5% 9.95 ± 41% +2455.5% 174.75 ± 2% sched_debug.cpu.cpu_load[1].stddev
16.57 ± 2% +8.4% 17.96 ± 4% +3666.4% 624.20 ± 3% sched_debug.cpu.cpu_load[2].avg
37.71 ± 24% +38.8% 52.35 ± 36% +2301.1% 905.42 ± 1% sched_debug.cpu.cpu_load[2].max
6.02 ± 6% +18.8% 7.15 ± 8% +3322.5% 205.97 ± 2% sched_debug.cpu.cpu_load[2].min
6.50 ± 19% +36.7% 8.89 ± 31% +2513.9% 169.85 ± 1% sched_debug.cpu.cpu_load[2].stddev
16.99 ± 1% +6.4% 18.07 ± 3% +3565.7% 622.77 ± 3% sched_debug.cpu.cpu_load[3].avg
36.87 ± 19% +33.9% 49.39 ± 28% +2345.3% 901.64 ± 1% sched_debug.cpu.cpu_load[3].max
7.33 ± 3% +5.3% 7.72 ± 8% +2833.8% 214.97 ± 4% sched_debug.cpu.cpu_load[3].min
6.11 ± 15% +34.9% 8.24 ± 23% +2636.3% 167.13 ± 1% sched_debug.cpu.cpu_load[3].stddev
17.32 ± 1% +4.8% 18.15 ± 2% +3491.8% 622.26 ± 3% sched_debug.cpu.cpu_load[4].avg
35.56 ± 12% +32.8% 47.23 ± 22% +2414.9% 894.39 ± 1% sched_debug.cpu.cpu_load[4].max
8.00 ± 5% -2.0% 7.84 ± 8% +2683.7% 222.70 ± 6% sched_debug.cpu.cpu_load[4].min
5.80 ± 9% +35.4% 7.85 ± 18% +2705.5% 162.77 ± 0% sched_debug.cpu.cpu_load[4].stddev
16851 ± 1% -16.8% 14014 ± 3% -15.6% 14218 ± 3% sched_debug.cpu.curr->pid.avg
19325 ± 0% -19.1% 15644 ± 2% -6.4% 18083 ± 0% sched_debug.cpu.curr->pid.max
5114 ± 8% -48.9% 2611 ± 16% +20.8% 6179 ± 4% sched_debug.cpu.curr->pid.stddev
18.95 ± 7% -2.9% 18.40 ± 11% +3.7e+06% 708609 ± 3% sched_debug.cpu.load.avg
95.67 ± 46% -7.6% 88.42 ± 74% +1.1e+06% 1067053 ± 4% sched_debug.cpu.load.max
19.89 ± 31% -5.7% 18.76 ± 54% +1.7e+06% 338423 ± 3% sched_debug.cpu.load.stddev
500000 ± 0% +0.0% 500000 ± 0% +14.4% 572147 ± 4% sched_debug.cpu.max_idle_balance_cost.max
0.00 ± 4% +10.8% 0.00 ± 9% +35.0% 0.00 ± 12% sched_debug.cpu.next_balance.stddev
1417 ± 3% -6.8% 1322 ± 11% +31.2% 1860 ± 8% sched_debug.cpu.nr_load_updates.stddev
9.75 ± 5% -13.6% 8.43 ± 4% -7.0% 9.07 ± 11% sched_debug.cpu.nr_running.avg
29.22 ± 2% -16.8% 24.30 ± 7% +25.6% 36.70 ± 5% sched_debug.cpu.nr_running.max
7.47 ± 4% -20.3% 5.95 ± 5% +39.8% 10.44 ± 7% sched_debug.cpu.nr_running.stddev
10261512 ± 0% +132.6% 23873264 ± 3% -10.3% 9200003 ± 2% sched_debug.cpu.nr_switches.avg
11634045 ± 1% +126.0% 26295756 ± 2% -3.0% 11281317 ± 1% sched_debug.cpu.nr_switches.max
8958320 ± 1% +141.1% 21601624 ± 4% -15.9% 7538372 ± 3% sched_debug.cpu.nr_switches.min
780364 ± 7% +61.6% 1261065 ± 4% +65.0% 1287398 ± 2% sched_debug.cpu.nr_switches.stddev
8.65 ± 13% +23.4% 10.68 ± 13% +170.0% 23.36 ± 13% sched_debug.cpu.nr_uninterruptible.max
-13.62 ±-17% +45.2% -19.77 ±-23% +112.3% -28.91 ±-27% sched_debug.cpu.nr_uninterruptible.min
4.45 ± 8% +29.8% 5.78 ± 10% +106.3% 9.18 ± 24% sched_debug.cpu.nr_uninterruptible.stddev
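The two %change columns above are relative to the first (v4.6-rc7)
column; for example, for hackbench.throughput, 132165 vs. 198307 is
about -33.4% and 174857 vs. 198307 about -11.8%. A quick sketch to
recompute such a delta (values copied from the hackbench.throughput
row):

awk 'BEGIN {
        base = 198307                      # v4.6-rc7 hackbench.throughput
        printf "v4.6:     %+.1f%%\n", (132165 - base) / base * 100
        printf "367d3fd5: %+.1f%%\n", (174857 - base) / base * 100
}'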
Best Regards,
Huang, Ying