[intel_pstate] adacdf3f2b8: +119.9% aim9.shell_rtns_3.ops_per_sec, +51.6% turbostat.Pkg_W
From: Fengguang Wu
Date: Wed Oct 01 2014 - 03:50:35 EST
Hi Dirk,
FYI, we noticed the following changes on commit
adacdf3f2b8e65aa441613cf61c4f598e9042690 ("intel_pstate: Remove C0 tracking")
test case: brickland3/aim9/300s-shell_rtns_3
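For context: the C0 tracking that this commit removes scaled intel_pstate's
busy estimate by the fraction of time the core spent in C0, so a bursty,
mostly-idle workload like shell_rtns_3 looked nearly idle and was held at a
low P-state. Roughly (an illustrative sketch only, not the driver's actual
fixed-point code; aperf, mperf and tsc stand for the per-sample MSR deltas):

    #include <stdint.h>

    /* before adacdf3f2b8: busy is diluted by idle time */
    static double busy_with_c0_tracking(uint64_t aperf, uint64_t mperf, uint64_t tsc)
    {
            double core_busy = 100.0 * aperf / mperf; /* actual vs. base freq while in C0 */
            double c0_pct    = (double)mperf / tsc;   /* fraction of the interval spent in C0 */
            return core_busy * c0_pct;
    }

    /* after adacdf3f2b8: idle time no longer dilutes the estimate */
    static double busy_without_c0_tracking(uint64_t aperf, uint64_t mperf)
    {
            return 100.0 * aperf / mperf;
    }

With that scaling gone the governor sees the shell bursts as close to fully
busy and requests a much higher P-state (turbostat.GHz +154.8% below), which
accounts for both the throughput gain and the higher Pkg_W/Cor_W.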
v3.15-rc8 adacdf3f2b8e65aa441613cf6
--------------- -------------------------
125 ± 5% +119.9% 275 ± 1% TOTAL aim9.shell_rtns_3.ops_per_sec
96.81 ± 3% +51.6% 146.77 ± 5% TOTAL turbostat.Pkg_W
36.74 ± 9% +121.4% 81.34 ±10% TOTAL turbostat.Cor_W
38.36 ± 1% +63.6% 62.76 ± 0% TOTAL turbostat.RAM_W
-13794 Â-5% -13.4% -11946 Â-6% TOTAL sched_debug.cfs_rq[1]:/.spread0
-10828 Â-8% -34.7% -7069 Â-20% TOTAL sched_debug.cfs_rq[34]:/.spread0
-14141 Â-8% -19.1% -11441 Â-19% TOTAL sched_debug.cfs_rq[24]:/.spread0
-13819 Â-7% -13.6% -11944 Â-6% TOTAL sched_debug.cfs_rq[7]:/.spread0
6006 Â 8% -71.6% 1703 Â21% TOTAL sched_debug.cpu#33.ttwu_local
6281 Â36% -68.3% 1988 Â49% TOTAL sched_debug.cpu#7.ttwu_count
3177 Â47% +235.5% 10660 Â 6% TOTAL cpuidle.C1-IVT-4S.usage
268 Â45% -68.5% 84 Â31% TOTAL sched_debug.cpu#45.ttwu_local
6658 Â34% -66.1% 2260 Â30% TOTAL sched_debug.cpu#21.ttwu_count
292 Â44% -71.7% 82 Â11% TOTAL sched_debug.cpu#23.ttwu_local
5351 Â 6% -61.8% 2045 Â29% TOTAL sched_debug.cpu#48.ttwu_local
2395 Â29% -62.9% 888 Â14% TOTAL sched_debug.cpu#37.ttwu_count
2269 Â11% +144.0% 5537 Â26% TOTAL sched_debug.cfs_rq[91]:/.blocked_load_avg
2040 Â17% +154.5% 5192 Â14% TOTAL sched_debug.cfs_rq[106]:/.blocked_load_avg
1.24 Â 6% +154.8% 3.15 Â 0% TOTAL turbostat.GHz
2417 Â10% +135.2% 5685 Â25% TOTAL sched_debug.cfs_rq[91]:/.tg_load_contrib
69 Â29% -52.4% 33 Â36% TOTAL sched_debug.cfs_rq[41]:/.avg->runnable_avg_sum
4422 Â29% -57.6% 1875 Â 9% TOTAL sched_debug.cpu#3.ttwu_local
2210 Â14% +140.9% 5324 Â14% TOTAL sched_debug.cfs_rq[106]:/.tg_load_contrib
1445 Â16% +126.6% 3276 Â14% TOTAL sched_debug.cfs_rq[3]:/.blocked_load_avg
1448 Â16% +127.4% 3293 Â14% TOTAL sched_debug.cfs_rq[3]:/.tg_load_contrib
561248 Â 4% +145.3% 1376953 Â 0% TOTAL cpuidle.C6-IVT-4S.usage
4975 Â28% -63.7% 1805 Â13% TOTAL sched_debug.cpu#4.ttwu_local
1348 Â19% +137.8% 3206 Â12% TOTAL sched_debug.cfs_rq[48]:/.blocked_load_avg
1696 Â13% +106.7% 3507 Â15% TOTAL sched_debug.cfs_rq[32]:/.tg_load_contrib
1684 Â13% +106.6% 3478 Â15% TOTAL sched_debug.cfs_rq[32]:/.blocked_load_avg
1619 Â22% +118.1% 3532 Â13% TOTAL sched_debug.cfs_rq[17]:/.blocked_load_avg
1626 Â22% +117.4% 3537 Â13% TOTAL sched_debug.cfs_rq[17]:/.tg_load_contrib
1354 Â19% +137.0% 3209 Â12% TOTAL sched_debug.cfs_rq[48]:/.tg_load_contrib
21314 Â 5% +125.6% 48083 Â 2% TOTAL sched_debug.cfs_rq[85]:/.tg_load_avg
21409 Â 5% +125.1% 48199 Â 1% TOTAL sched_debug.cfs_rq[83]:/.tg_load_avg
21340 Â 5% +125.8% 48193 Â 1% TOTAL sched_debug.cfs_rq[84]:/.tg_load_avg
21291 Â 5% +125.7% 48060 Â 2% TOTAL sched_debug.cfs_rq[86]:/.tg_load_avg
21191 Â 6% +126.2% 47929 Â 1% TOTAL sched_debug.cfs_rq[102]:/.tg_load_avg
21266 Â 5% +126.0% 48058 Â 2% TOTAL sched_debug.cfs_rq[90]:/.tg_load_avg
21289 Â 5% +125.7% 48054 Â 2% TOTAL sched_debug.cfs_rq[89]:/.tg_load_avg
21186 Â 6% +126.2% 47929 Â 1% TOTAL sched_debug.cfs_rq[101]:/.tg_load_avg
21314 Â 6% +125.8% 48131 Â 1% TOTAL sched_debug.cfs_rq[113]:/.tg_load_avg
21266 Â 5% +126.1% 48083 Â 2% TOTAL sched_debug.cfs_rq[91]:/.tg_load_avg
21298 Â 6% +125.3% 47981 Â 1% TOTAL sched_debug.cfs_rq[106]:/.tg_load_avg
21309 Â 6% +125.0% 47953 Â 1% TOTAL sched_debug.cfs_rq[105]:/.tg_load_avg
21178 Â 6% +126.4% 47953 Â 1% TOTAL sched_debug.cfs_rq[100]:/.tg_load_avg
21236 Â 6% +126.5% 48095 Â 1% TOTAL sched_debug.cfs_rq[93]:/.tg_load_avg
21271 Â 5% +126.0% 48077 Â 1% TOTAL sched_debug.cfs_rq[88]:/.tg_load_avg
21286 Â 5% +126.0% 48096 Â 2% TOTAL sched_debug.cfs_rq[87]:/.tg_load_avg
21269 Â 6% +126.1% 48093 Â 1% TOTAL sched_debug.cfs_rq[92]:/.tg_load_avg
21291 Â 6% +125.2% 47956 Â 1% TOTAL sched_debug.cfs_rq[104]:/.tg_load_avg
21303 Â 6% +125.3% 48005 Â 1% TOTAL sched_debug.cfs_rq[107]:/.tg_load_avg
21247 Â 6% +125.7% 47957 Â 1% TOTAL sched_debug.cfs_rq[94]:/.tg_load_avg
21350 Â 6% +125.7% 48185 Â 1% TOTAL sched_debug.cfs_rq[119]:/.tg_load_avg
21357 Â 6% +125.3% 48108 Â 1% TOTAL sched_debug.cfs_rq[114]:/.tg_load_avg
21263 Â 6% +125.6% 47968 Â 1% TOTAL sched_debug.cfs_rq[103]:/.tg_load_avg
21362 Â 6% +125.4% 48154 Â 1% TOTAL sched_debug.cfs_rq[118]:/.tg_load_avg
21470 Â 5% +124.6% 48223 Â 1% TOTAL sched_debug.cfs_rq[82]:/.tg_load_avg
1513 Â24% +122.0% 3358 Â16% TOTAL sched_debug.cfs_rq[47]:/.blocked_load_avg
21170 Â 6% +126.4% 47936 Â 1% TOTAL sched_debug.cfs_rq[98]:/.tg_load_avg
21216 Â 6% +126.0% 47943 Â 1% TOTAL sched_debug.cfs_rq[95]:/.tg_load_avg
21183 Â 6% +126.3% 47936 Â 1% TOTAL sched_debug.cfs_rq[97]:/.tg_load_avg
21351 Â 6% +125.3% 48106 Â 1% TOTAL sched_debug.cfs_rq[115]:/.tg_load_avg
21194 Â 6% +126.2% 47952 Â 1% TOTAL sched_debug.cfs_rq[96]:/.tg_load_avg
21181 Â 6% +126.3% 47929 Â 1% TOTAL sched_debug.cfs_rq[99]:/.tg_load_avg
21366 Â 6% +125.1% 48101 Â 1% TOTAL sched_debug.cfs_rq[116]:/.tg_load_avg
21352 Â 6% +125.5% 48145 Â 1% TOTAL sched_debug.cfs_rq[112]:/.tg_load_avg
21381 Â 6% +125.1% 48131 Â 1% TOTAL sched_debug.cfs_rq[117]:/.tg_load_avg
21507 Â 5% +124.3% 48244 Â 1% TOTAL sched_debug.cfs_rq[81]:/.tg_load_avg
21346 Â 6% +125.5% 48126 Â 1% TOTAL sched_debug.cfs_rq[111]:/.tg_load_avg
22339 Â 4% +124.5% 50156 Â 1% TOTAL sched_debug.cfs_rq[5]:/.tg_load_avg
21569 Â 5% +123.7% 48256 Â 1% TOTAL sched_debug.cfs_rq[80]:/.tg_load_avg
21343 Â 6% +125.0% 48018 Â 1% TOTAL sched_debug.cfs_rq[108]:/.tg_load_avg
1528 Â23% +120.8% 3373 Â16% TOTAL sched_debug.cfs_rq[47]:/.tg_load_contrib
21616 Â 5% +123.2% 48245 Â 2% TOTAL sched_debug.cfs_rq[78]:/.tg_load_avg
21595 Â 4% +124.7% 48525 Â 1% TOTAL sched_debug.cfs_rq[41]:/.tg_load_avg
21622 Â 5% +123.4% 48294 Â 2% TOTAL sched_debug.cfs_rq[77]:/.tg_load_avg
21571 Â 4% +123.7% 48245 Â 2% TOTAL sched_debug.cfs_rq[79]:/.tg_load_avg
22460 Â 4% +124.3% 50377 Â 1% TOTAL sched_debug.cfs_rq[4]:/.tg_load_avg
22257 Â 5% +124.2% 49910 Â 1% TOTAL sched_debug.cfs_rq[8]:/.tg_load_avg
22291 Â 5% +124.0% 49922 Â 1% TOTAL sched_debug.cfs_rq[7]:/.tg_load_avg
22586 Â 4% +123.3% 50430 Â 1% TOTAL sched_debug.cfs_rq[3]:/.tg_load_avg
22236 Â 5% +124.1% 49831 Â 1% TOTAL sched_debug.cfs_rq[9]:/.tg_load_avg
21599 Â 4% +124.4% 48473 Â 1% TOTAL sched_debug.cfs_rq[42]:/.tg_load_avg
22118 Â 6% +123.8% 49501 Â 1% TOTAL sched_debug.cfs_rq[12]:/.tg_load_avg
21591 Â 4% +124.8% 48544 Â 1% TOTAL sched_debug.cfs_rq[40]:/.tg_load_avg
21348 Â 6% +125.3% 48090 Â 1% TOTAL sched_debug.cfs_rq[110]:/.tg_load_avg
21636 Â 4% +123.4% 48331 Â 1% TOTAL sched_debug.cfs_rq[43]:/.tg_load_avg
22170 Â 5% +123.5% 49543 Â 1% TOTAL sched_debug.cfs_rq[11]:/.tg_load_avg
22117 Â 6% +123.8% 49505 Â 1% TOTAL sched_debug.cfs_rq[13]:/.tg_load_avg
21656 Â 5% +122.8% 48260 Â 1% TOTAL sched_debug.cfs_rq[44]:/.tg_load_avg
22206 Â 5% +123.4% 49613 Â 1% TOTAL sched_debug.cfs_rq[10]:/.tg_load_avg
22307 Â 4% +124.3% 50042 Â 1% TOTAL sched_debug.cfs_rq[6]:/.tg_load_avg
1438 Â22% +123.8% 3218 Â13% TOTAL sched_debug.cfs_rq[18]:/.blocked_load_avg
21389 Â 6% +124.7% 48064 Â 1% TOTAL sched_debug.cfs_rq[109]:/.tg_load_avg
1438 Â22% +124.0% 3222 Â13% TOTAL sched_debug.cfs_rq[18]:/.tg_load_contrib
21612 Â 4% +124.6% 48541 Â 1% TOTAL sched_debug.cfs_rq[39]:/.tg_load_avg
103 Â15% -56.4% 45 Â32% TOTAL sched_debug.cfs_rq[30]:/.avg->runnable_avg_sum
22746 Â 4% +122.6% 50626 Â 0% TOTAL sched_debug.cfs_rq[2]:/.tg_load_avg
22689 Â 4% +122.6% 50512 Â 0% TOTAL sched_debug.cfs_rq[1]:/.tg_load_avg
1498 Â12% +118.2% 3271 Â16% TOTAL sched_debug.cfs_rq[33]:/.tg_load_contrib
21710 Â 5% +122.1% 48223 Â 1% TOTAL sched_debug.cfs_rq[45]:/.tg_load_avg
21675 Â 5% +123.9% 48522 Â 1% TOTAL sched_debug.cfs_rq[68]:/.tg_load_avg
22702 Â 4% +123.0% 50638 Â 1% TOTAL sched_debug.cfs_rq[0]:/.tg_load_avg
21791 Â 5% +120.8% 48114 Â 1% TOTAL sched_debug.cfs_rq[47]:/.tg_load_avg
21768 Â 5% +122.1% 48352 Â 1% TOTAL sched_debug.cfs_rq[57]:/.tg_load_avg
21611 Â 4% +123.9% 48393 Â 1% TOTAL sched_debug.cfs_rq[72]:/.tg_load_avg
21668 Â 4% +124.2% 48578 Â 1% TOTAL sched_debug.cfs_rq[38]:/.tg_load_avg
21661 Â 5% +123.5% 48416 Â 1% TOTAL sched_debug.cfs_rq[71]:/.tg_load_avg
21653 Â 5% +123.0% 48291 Â 2% TOTAL sched_debug.cfs_rq[76]:/.tg_load_avg
21748 Â 5% +122.7% 48426 Â 1% TOTAL sched_debug.cfs_rq[62]:/.tg_load_avg
21770 Â 5% +121.2% 48162 Â 1% TOTAL sched_debug.cfs_rq[46]:/.tg_load_avg
21688 Â 5% +122.7% 48295 Â 2% TOTAL sched_debug.cfs_rq[75]:/.tg_load_avg
21651 Â 5% +123.5% 48392 Â 1% TOTAL sched_debug.cfs_rq[74]:/.tg_load_avg
101524 Â 6% +122.3% 225684 Â 4% TOTAL proc-vmstat.pgalloc_dma32
21758 Â 5% +122.5% 48417 Â 1% TOTAL sched_debug.cfs_rq[63]:/.tg_load_avg
21721 Â 5% +123.2% 48488 Â 1% TOTAL sched_debug.cfs_rq[67]:/.tg_load_avg
2057 Â10% +137.4% 4885 Â18% TOTAL sched_debug.cfs_rq[61]:/.blocked_load_avg
21704 Â 5% +123.2% 48439 Â 1% TOTAL sched_debug.cfs_rq[64]:/.tg_load_avg
21695 Â 5% +123.2% 48422 Â 1% TOTAL sched_debug.cfs_rq[70]:/.tg_load_avg
21706 Â 5% +123.1% 48428 Â 1% TOTAL sched_debug.cfs_rq[69]:/.tg_load_avg
21837 Â 5% +121.2% 48294 Â 1% TOTAL sched_debug.cfs_rq[56]:/.tg_load_avg
21761 Â 4% +122.3% 48370 Â 1% TOTAL sched_debug.cfs_rq[61]:/.tg_load_avg
21769 Â 4% +123.2% 48581 Â 1% TOTAL sched_debug.cfs_rq[37]:/.tg_load_avg
21704 Â 5% +123.4% 48479 Â 1% TOTAL sched_debug.cfs_rq[66]:/.tg_load_avg
21643 Â 5% +123.5% 48365 Â 1% TOTAL sched_debug.cfs_rq[73]:/.tg_load_avg
21693 Â 5% +123.5% 48480 Â 1% TOTAL sched_debug.cfs_rq[65]:/.tg_load_avg
1498 Â12% +117.5% 3260 Â16% TOTAL sched_debug.cfs_rq[33]:/.blocked_load_avg
21762 Â 4% +122.0% 48313 Â 1% TOTAL sched_debug.cfs_rq[58]:/.tg_load_avg
21873 Â 5% +120.1% 48132 Â 1% TOTAL sched_debug.cfs_rq[48]:/.tg_load_avg
21778 Â 4% +123.3% 48623 Â 1% TOTAL sched_debug.cfs_rq[36]:/.tg_load_avg
21887 Â 3% +122.3% 48647 Â 1% TOTAL sched_debug.cfs_rq[35]:/.tg_load_avg
22106 Â 6% +123.1% 49330 Â 1% TOTAL sched_debug.cfs_rq[14]:/.tg_load_avg
21743 Â 5% +122.5% 48382 Â 1% TOTAL sched_debug.cfs_rq[60]:/.tg_load_avg
21761 Â 5% +122.2% 48362 Â 1% TOTAL sched_debug.cfs_rq[59]:/.tg_load_avg
21933 Â 3% +121.8% 48653 Â 1% TOTAL sched_debug.cfs_rq[34]:/.tg_load_avg
22003 Â 3% +121.9% 48833 Â 1% TOTAL sched_debug.cfs_rq[29]:/.tg_load_avg
21927 Â 5% +119.2% 48069 Â 1% TOTAL sched_debug.cfs_rq[49]:/.tg_load_avg
21970 Â 4% +123.1% 49009 Â 2% TOTAL sched_debug.cfs_rq[22]:/.tg_load_avg
21979 Â 5% +123.1% 49045 Â 1% TOTAL sched_debug.cfs_rq[21]:/.tg_load_avg
22033 Â 5% +123.1% 49153 Â 1% TOTAL sched_debug.cfs_rq[19]:/.tg_load_avg
21979 Â 5% +118.8% 48084 Â 1% TOTAL sched_debug.cfs_rq[52]:/.tg_load_avg
22059 Â 3% +121.2% 48794 Â 1% TOTAL sched_debug.cfs_rq[28]:/.tg_load_avg
21984 Â 4% +122.9% 48996 Â 2% TOTAL sched_debug.cfs_rq[23]:/.tg_load_avg
21965 Â 4% +119.1% 48128 Â 1% TOTAL sched_debug.cfs_rq[53]:/.tg_load_avg
22037 Â 3% +121.4% 48784 Â 1% TOTAL sched_debug.cfs_rq[30]:/.tg_load_avg
22069 Â 3% +121.1% 48793 Â 1% TOTAL sched_debug.cfs_rq[31]:/.tg_load_avg
21957 Â 4% +119.6% 48208 Â 1% TOTAL sched_debug.cfs_rq[54]:/.tg_load_avg
21954 Â 5% +118.8% 48030 Â 1% TOTAL sched_debug.cfs_rq[50]:/.tg_load_avg
22117 Â 6% +122.8% 49277 Â 1% TOTAL sched_debug.cfs_rq[15]:/.tg_load_avg
21978 Â 5% +118.6% 48046 Â 1% TOTAL sched_debug.cfs_rq[51]:/.tg_load_avg
21963 Â 4% +119.7% 48246 Â 1% TOTAL sched_debug.cfs_rq[55]:/.tg_load_avg
22012 Â 3% +121.3% 48712 Â 1% TOTAL sched_debug.cfs_rq[32]:/.tg_load_avg
22037 Â 3% +121.6% 48838 Â 1% TOTAL sched_debug.cfs_rq[27]:/.tg_load_avg
22029 Â 3% +122.0% 48908 Â 1% TOTAL sched_debug.cfs_rq[26]:/.tg_load_avg
21990 Â 4% +122.3% 48891 Â 1% TOTAL sched_debug.cfs_rq[25]:/.tg_load_avg
22010 Â 4% +122.2% 48914 Â 1% TOTAL sched_debug.cfs_rq[24]:/.tg_load_avg
21985 Â 3% +121.4% 48671 Â 1% TOTAL sched_debug.cfs_rq[33]:/.tg_load_avg
22023 Â 5% +122.8% 49058 Â 2% TOTAL sched_debug.cfs_rq[20]:/.tg_load_avg
22054 Â 5% +123.3% 49240 Â 1% TOTAL sched_debug.cfs_rq[17]:/.tg_load_avg
22112 Â 6% +122.7% 49239 Â 1% TOTAL sched_debug.cfs_rq[16]:/.tg_load_avg
22058 Â 5% +122.8% 49137 Â 1% TOTAL sched_debug.cfs_rq[18]:/.tg_load_avg
2237 Â 9% +124.1% 5013 Â18% TOTAL sched_debug.cfs_rq[61]:/.tg_load_contrib
15018 Â 5% +114.2% 32164 Â 0% TOTAL sched_debug.cpu#61.ttwu_local
15138 Â 5% +113.2% 32273 Â 1% TOTAL sched_debug.cpu#91.ttwu_local
15141 Â 6% +116.6% 32798 Â 4% TOTAL sched_debug.cpu#106.ttwu_local
13386019 Â 4% +113.7% 28610387 Â 2% TOTAL proc-vmstat.pgalloc_normal
13484197 Â 4% +113.8% 28830514 Â 2% TOTAL proc-vmstat.pgfree
76806 Â 5% +111.6% 162514 Â 1% TOTAL sched_debug.cpu#91.nr_switches
76808 Â 5% +111.8% 162667 Â 1% TOTAL sched_debug.cpu#91.sched_count
13084681 Â 4% +113.2% 27900527 Â 2% TOTAL proc-vmstat.numa_local
13084697 Â 4% +113.2% 27900563 Â 2% TOTAL proc-vmstat.numa_hit
76596 Â 6% +112.9% 163099 Â 2% TOTAL sched_debug.cpu#106.nr_switches
30771 Â 6% +111.9% 65201 Â 2% TOTAL sched_debug.cpu#106.sched_goidle
30881 Â 5% +110.9% 65122 Â 2% TOTAL sched_debug.cpu#91.sched_goidle
18389572 Â 5% +111.8% 38945748 Â 1% TOTAL proc-vmstat.pgfault
3250107 Â 5% +111.8% 6885051 Â 1% TOTAL numa-numastat.node1.local_node
3250110 Â 5% +111.8% 6885062 Â 1% TOTAL numa-numastat.node1.numa_hit
3262764 Â 5% +115.3% 7024497 Â 4% TOTAL numa-numastat.node2.numa_hit
3262758 Â 5% +115.3% 7024485 Â 4% TOTAL numa-numastat.node2.local_node
3279215 Â 4% +113.9% 7015147 Â 4% TOTAL numa-numastat.node0.numa_hit
3279211 Â 4% +113.9% 7015136 Â 4% TOTAL numa-numastat.node0.local_node
76121 Â 4% +112.9% 162034 Â 1% TOTAL sched_debug.cpu#61.nr_switches
77527 Â 6% +109.0% 162036 Â 1% TOTAL sched_debug.cpu#61.sched_count
243.30 Â33% -51.6% 117.69 Â29% TOTAL sched_debug.cfs_rq[92]:/.exec_clock
30594 Â 4% +112.1% 64904 Â 2% TOTAL sched_debug.cpu#61.sched_goidle
3281833 Â 4% +109.8% 6886537 Â 1% TOTAL numa-numastat.node3.local_node
3281836 Â 4% +109.8% 6886541 Â 1% TOTAL numa-numastat.node3.numa_hit
78218 Â 6% +109.1% 163583 Â 3% TOTAL sched_debug.cpu#106.sched_count
1727502 Â 6% +107.5% 3583823 Â 4% TOTAL numa-vmstat.node2.numa_local
1742994 Â 5% +103.7% 3550217 Â 1% TOTAL numa-vmstat.node3.numa_local
1794367 Â 5% +101.6% 3617858 Â 1% TOTAL numa-vmstat.node1.numa_local
1810000 Â 5% +102.6% 3666376 Â 3% TOTAL numa-vmstat.node2.numa_hit
1825404 Â 5% +99.0% 3632638 Â 1% TOTAL numa-vmstat.node3.numa_hit
1816414 Â 5% +101.8% 3666109 Â 3% TOTAL numa-vmstat.node0.numa_local
1843627 Â 6% +100.6% 3698703 Â 1% TOTAL numa-vmstat.node1.numa_hit
3135 Â12% -46.7% 1672 Â15% TOTAL sched_debug.cpu#34.ttwu_local
1849929 Â 4% +98.3% 3668167 Â 3% TOTAL numa-vmstat.node0.numa_hit
8992 Â13% -50.9% 4418 Â41% TOTAL sched_debug.cpu#30.sched_count
241.28 Â24% -40.6% 143.43 Â25% TOTAL sched_debug.cfs_rq[11]:/.exec_clock
18020 Â39% -55.2% 8066 Â16% TOTAL sched_debug.cpu#4.ttwu_count
319 Â22% +61.6% 516 Â13% TOTAL cpuidle.C1E-IVT-4S.usage
4156 Â15% -47.6% 2176 Â41% TOTAL sched_debug.cpu#30.sched_goidle
8343 Â15% -47.6% 4375 Â41% TOTAL sched_debug.cpu#30.nr_switches
29165 Â 1% +76.4% 51461 Â 3% TOTAL sched_debug.cpu#106.ttwu_count
28980 Â 2% +73.4% 50247 Â 1% TOTAL sched_debug.cpu#61.ttwu_count
29138 Â 1% +74.5% 50853 Â 1% TOTAL sched_debug.cpu#91.ttwu_count
22537 Â 8% +70.7% 38465 Â19% TOTAL sched_debug.cpu#47.ttwu_count
1641 Â 3% +67.9% 2757 Â 1% TOTAL proc-vmstat.pgactivate
131 Â19% -37.4% 82 Â 5% TOTAL sched_debug.cpu#106.cpu_load[4]
13130 Â 2% +62.4% 21321 Â 8% TOTAL sched_debug.cpu#47.sched_goidle
7089 Â13% +54.9% 10979 Â11% TOTAL sched_debug.cpu#20.sched_goidle
26562 Â 2% +61.5% 42903 Â 7% TOTAL sched_debug.cpu#47.nr_switches
14233 Â13% +54.5% 21991 Â11% TOTAL sched_debug.cpu#20.nr_switches
88 Â17% +54.3% 135 Â25% TOTAL sched_debug.cpu#107.ttwu_local
4777 Â12% +54.7% 7389 Â14% TOTAL sched_debug.cfs_rq[34]:/.min_vruntime
119 Â12% -32.7% 80 Â 8% TOTAL sched_debug.cpu#61.cpu_load[4]
149 Â17% -33.5% 99 Â 9% TOTAL sched_debug.cpu#106.cpu_load[3]
11071 Â17% -26.7% 8120 Â22% TOTAL sched_debug.cpu#34.ttwu_count
3831 Â 6% +42.6% 5463 Â 7% TOTAL numa-meminfo.node2.KernelStack
1712 Â11% -43.0% 975 Â22% TOTAL sched_debug.cpu#1.ttwu_local
239 Â 6% +41.7% 339 Â 7% TOTAL numa-vmstat.node2.nr_kernel_stack
3638 Â24% -32.8% 2443 Â32% TOTAL sched_debug.cpu#1.ttwu_count
135 Â 7% -37.0% 85 Â11% TOTAL sched_debug.cpu#91.cpu_load[4]
5131 Â18% -21.3% 4038 Â 5% TOTAL meminfo.AnonHugePages
227 Â12% -28.6% 162 Â18% TOTAL sched_debug.cpu#91.cpu_load[0]
66199 Â 6% +49.7% 99076 Â 1% TOTAL sched_debug.cpu#106.nr_load_updates
31880 Â 6% +41.2% 45012 Â10% TOTAL sched_debug.cpu#47.sched_count
13581 Â 3% +47.7% 20066 Â 5% TOTAL sched_debug.cpu#32.sched_goidle
29309 Â12% +41.2% 41372 Â 9% TOTAL sched_debug.cpu#32.sched_count
69667 Â 4% +42.5% 99307 Â 1% TOTAL sched_debug.cpu#91.nr_load_updates
160 Â15% -26.9% 117 Â18% TOTAL sched_debug.cfs_rq[61]:/.load
27436 Â 3% +47.3% 40401 Â 5% TOTAL sched_debug.cpu#32.nr_switches
70549 Â 4% +41.8% 100061 Â 1% TOTAL sched_debug.cpu#61.nr_load_updates
13693 Â 5% +48.4% 20325 Â 6% TOTAL sched_debug.cpu#17.sched_goidle
3973 Â 6% +41.4% 5619 Â 6% TOTAL numa-meminfo.node3.KernelStack
27719 Â 5% +47.9% 40984 Â 6% TOTAL sched_debug.cpu#17.nr_switches
248 Â 6% +40.7% 349 Â 6% TOTAL numa-vmstat.node3.nr_kernel_stack
6508 Â 1% +49.1% 9705 Â22% TOTAL sched_debug.cpu#35.sched_goidle
138 Â14% -27.4% 100 Â10% TOTAL sched_debug.cpu#61.cpu_load[3]
13073 Â 2% +48.7% 19438 Â22% TOTAL sched_debug.cpu#35.nr_switches
666 Â14% +50.7% 1004 Â17% TOTAL cpuidle.C3-IVT-4S.usage
80 Â33% -45.9% 43 Â34% TOTAL sched_debug.cfs_rq[39]:/.avg->runnable_avg_sum
19457 Â 7% +31.6% 25610 Â 7% TOTAL sched_debug.cpu#47.nr_load_updates
21711 Â13% +45.4% 31570 Â19% TOTAL sched_debug.cpu#17.ttwu_count
13418 Â 3% +45.3% 19492 Â22% TOTAL sched_debug.cpu#35.sched_count
22622 Â 5% +37.5% 31103 Â15% TOTAL sched_debug.cpu#32.ttwu_count
21 Â 9% -19.6% 17 Â12% TOTAL sched_debug.cpu#99.ttwu_local
191 Â12% -25.1% 143 Â10% TOTAL sched_debug.cpu#91.cpu_load[1]
154 Â 9% -31.6% 105 Â11% TOTAL sched_debug.cpu#91.cpu_load[3]
160 Â15% -27.2% 116 Â14% TOTAL sched_debug.cpu#106.cpu_load[2]
15464 Â15% +43.3% 22163 Â11% TOTAL sched_debug.cpu#20.sched_count
176 Â14% -24.4% 133 Â23% TOTAL sched_debug.cpu#106.cpu_load[1]
169 Â10% -25.2% 126 Â10% TOTAL sched_debug.cpu#91.cpu_load[2]
20 Â10% -15.5% 17 Â 8% TOTAL sched_debug.cpu#74.ttwu_local
87151 Â17% +34.6% 117307 Â 3% TOTAL sched_debug.cfs_rq[106]:/.spread0
23131 Â 9% -20.8% 18314 Â12% TOTAL sched_debug.cpu#33.ttwu_count
9097 Â11% -23.5% 6955 Â 8% TOTAL sched_debug.cfs_rq[0]:/.exec_clock
1485 Â 2% +25.6% 1866 Â 7% TOTAL proc-vmstat.nr_kernel_stack
1788 Â 2% -20.6% 1419 Â 0% TOTAL sched_debug.cfs_rq[106]:/.tg->runnable_avg
1810 Â 2% -20.6% 1436 Â 0% TOTAL sched_debug.cfs_rq[119]:/.tg->runnable_avg
1809 Â 2% -20.7% 1435 Â 0% TOTAL sched_debug.cfs_rq[118]:/.tg->runnable_avg
1784 Â 2% -20.6% 1417 Â 0% TOTAL sched_debug.cfs_rq[105]:/.tg->runnable_avg
1798 Â 2% -20.7% 1426 Â 0% TOTAL sched_debug.cfs_rq[111]:/.tg->runnable_avg
1803 Â 2% -20.5% 1433 Â 0% TOTAL sched_debug.cfs_rq[115]:/.tg->runnable_avg
1801 Â 2% -20.5% 1431 Â 0% TOTAL sched_debug.cfs_rq[114]:/.tg->runnable_avg
1790 Â 2% -20.6% 1421 Â 0% TOTAL sched_debug.cfs_rq[107]:/.tg->runnable_avg
1799 Â 2% -20.6% 1428 Â 0% TOTAL sched_debug.cfs_rq[112]:/.tg->runnable_avg
1792 Â 2% -20.6% 1423 Â 0% TOTAL sched_debug.cfs_rq[108]:/.tg->runnable_avg
1782 Â 2% -20.5% 1415 Â 0% TOTAL sched_debug.cfs_rq[104]:/.tg->runnable_avg
1800 Â 2% -20.6% 1430 Â 0% TOTAL sched_debug.cfs_rq[113]:/.tg->runnable_avg
1805 Â 2% -20.6% 1434 Â 0% TOTAL sched_debug.cfs_rq[116]:/.tg->runnable_avg
1806 Â 2% -20.6% 1435 Â 0% TOTAL sched_debug.cfs_rq[117]:/.tg->runnable_avg
1795 Â 2% -20.7% 1424 Â 0% TOTAL sched_debug.cfs_rq[109]:/.tg->runnable_avg
95310 Â 4% +23.7% 117875 Â 2% TOTAL sched_debug.cfs_rq[91]:/.spread0
1796 Â 2% -20.7% 1425 Â 0% TOTAL sched_debug.cfs_rq[110]:/.tg->runnable_avg
1778 Â 2% -20.5% 1414 Â 0% TOTAL sched_debug.cfs_rq[103]:/.tg->runnable_avg
1771 Â 2% -20.3% 1411 Â 0% TOTAL sched_debug.cfs_rq[100]:/.tg->runnable_avg
1772 Â 2% -20.3% 1413 Â 0% TOTAL sched_debug.cfs_rq[101]:/.tg->runnable_avg
1768 Â 2% -20.3% 1410 Â 0% TOTAL sched_debug.cfs_rq[99]:/.tg->runnable_avg
1774 Â 2% -20.3% 1413 Â 0% TOTAL sched_debug.cfs_rq[102]:/.tg->runnable_avg
97534 Â 4% +21.8% 118768 Â 5% TOTAL sched_debug.cfs_rq[61]:/.spread0
1766 Â 2% -20.2% 1408 Â 0% TOTAL sched_debug.cfs_rq[98]:/.tg->runnable_avg
1762 Â 2% -20.1% 1407 Â 0% TOTAL sched_debug.cfs_rq[97]:/.tg->runnable_avg
1760 Â 2% -20.1% 1405 Â 0% TOTAL sched_debug.cfs_rq[96]:/.tg->runnable_avg
1756 Â 2% -20.0% 1405 Â 0% TOTAL sched_debug.cfs_rq[95]:/.tg->runnable_avg
1747 Â 2% -19.8% 1400 Â 0% TOTAL sched_debug.cfs_rq[92]:/.tg->runnable_avg
1753 Â 2% -19.9% 1404 Â 0% TOTAL sched_debug.cfs_rq[94]:/.tg->runnable_avg
1751 Â 2% -19.9% 1402 Â 0% TOTAL sched_debug.cfs_rq[93]:/.tg->runnable_avg
1743 Â 2% -19.8% 1398 Â 0% TOTAL sched_debug.cfs_rq[91]:/.tg->runnable_avg
23871 Â 2% +24.3% 29667 Â 8% TOTAL meminfo.KernelStack
1739 Â 2% -19.7% 1397 Â 0% TOTAL sched_debug.cfs_rq[90]:/.tg->runnable_avg
1734 Â 2% -19.6% 1395 Â 0% TOTAL sched_debug.cfs_rq[89]:/.tg->runnable_avg
1729 Â 2% -19.4% 1394 Â 0% TOTAL sched_debug.cfs_rq[88]:/.tg->runnable_avg
1725 Â 2% -19.3% 1392 Â 0% TOTAL sched_debug.cfs_rq[87]:/.tg->runnable_avg
1724 Â 2% -19.3% 1390 Â 0% TOTAL sched_debug.cfs_rq[86]:/.tg->runnable_avg
1721 Â 2% -19.3% 1389 Â 0% TOTAL sched_debug.cfs_rq[85]:/.tg->runnable_avg
1718 Â 2% -19.2% 1388 Â 0% TOTAL sched_debug.cfs_rq[84]:/.tg->runnable_avg
102757 Â13% +28.2% 131768 Â 3% TOTAL sched_debug.cfs_rq[106]:/.min_vruntime
1699 Â 2% -19.0% 1376 Â 0% TOTAL sched_debug.cfs_rq[77]:/.tg->runnable_avg
1701 Â 2% -19.0% 1378 Â 0% TOTAL sched_debug.cfs_rq[78]:/.tg->runnable_avg
1692 Â 2% -18.9% 1373 Â 0% TOTAL sched_debug.cfs_rq[75]:/.tg->runnable_avg
1695 Â 2% -19.0% 1373 Â 0% TOTAL sched_debug.cfs_rq[76]:/.tg->runnable_avg
1715 Â 2% -19.1% 1387 Â 0% TOTAL sched_debug.cfs_rq[83]:/.tg->runnable_avg
21038 Â 5% +20.1% 25263 Â 5% TOTAL sched_debug.cpu#17.nr_load_updates
1683 Â 2% -18.8% 1367 Â 0% TOTAL sched_debug.cfs_rq[71]:/.tg->runnable_avg
1709 Â 2% -18.9% 1385 Â 0% TOTAL sched_debug.cfs_rq[82]:/.tg->runnable_avg
1701 Â 2% -18.9% 1379 Â 0% TOTAL sched_debug.cfs_rq[79]:/.tg->runnable_avg
1686 Â 2% -18.8% 1369 Â 0% TOTAL sched_debug.cfs_rq[73]:/.tg->runnable_avg
1681 Â 2% -18.8% 1365 Â 0% TOTAL sched_debug.cfs_rq[70]:/.tg->runnable_avg
1705 Â 2% -18.9% 1382 Â 0% TOTAL sched_debug.cfs_rq[80]:/.tg->runnable_avg
1683 Â 2% -18.7% 1368 Â 0% TOTAL sched_debug.cfs_rq[72]:/.tg->runnable_avg
1672 Â 2% -18.5% 1362 Â 0% TOTAL sched_debug.cfs_rq[67]:/.tg->runnable_avg
1688 Â 2% -18.8% 1371 Â 0% TOTAL sched_debug.cfs_rq[74]:/.tg->runnable_avg
1663 Â 2% -18.5% 1356 Â 0% TOTAL sched_debug.cfs_rq[63]:/.tg->runnable_avg
1679 Â 2% -18.7% 1364 Â 0% TOTAL sched_debug.cfs_rq[69]:/.tg->runnable_avg
1670 Â 2% -18.5% 1362 Â 0% TOTAL sched_debug.cfs_rq[66]:/.tg->runnable_avg
1675 Â 2% -18.6% 1363 Â 0% TOTAL sched_debug.cfs_rq[68]:/.tg->runnable_avg
1665 Â 2% -18.5% 1357 Â 0% TOTAL sched_debug.cfs_rq[64]:/.tg->runnable_avg
1707 Â 2% -18.9% 1384 Â 0% TOTAL sched_debug.cfs_rq[81]:/.tg->runnable_avg
1667 Â 2% -18.5% 1359 Â 0% TOTAL sched_debug.cfs_rq[65]:/.tg->runnable_avg
152 Â14% -18.9% 123 Â12% TOTAL sched_debug.cpu#61.cpu_load[2]
1658 Â 2% -18.3% 1355 Â 0% TOTAL sched_debug.cfs_rq[62]:/.tg->runnable_avg
1652 Â 2% -18.1% 1353 Â 0% TOTAL sched_debug.cfs_rq[61]:/.tg->runnable_avg
1650 Â 2% -18.0% 1352 Â 0% TOTAL sched_debug.cfs_rq[60]:/.tg->runnable_avg
1643 Â 2% -17.9% 1348 Â 0% TOTAL sched_debug.cfs_rq[57]:/.tg->runnable_avg
113140 Â 4% +17.8% 133227 Â 5% TOTAL sched_debug.cfs_rq[61]:/.min_vruntime
1648 Â 2% -18.0% 1351 Â 0% TOTAL sched_debug.cfs_rq[59]:/.tg->runnable_avg
110916 Â 4% +19.3% 132335 Â 2% TOTAL sched_debug.cfs_rq[91]:/.min_vruntime
1625 Â 1% -17.2% 1346 Â 0% TOTAL sched_debug.cfs_rq[55]:/.tg->runnable_avg
1638 Â 2% -17.8% 1347 Â 0% TOTAL sched_debug.cfs_rq[56]:/.tg->runnable_avg
1646 Â 2% -17.9% 1350 Â 0% TOTAL sched_debug.cfs_rq[58]:/.tg->runnable_avg
1615 Â 1% -17.0% 1340 Â 0% TOTAL sched_debug.cfs_rq[51]:/.tg->runnable_avg
1616 Â 1% -17.0% 1341 Â 0% TOTAL sched_debug.cfs_rq[52]:/.tg->runnable_avg
1620 Â 1% -17.1% 1343 Â 0% TOTAL sched_debug.cfs_rq[53]:/.tg->runnable_avg
1611 Â 1% -17.0% 1337 Â 0% TOTAL sched_debug.cfs_rq[50]:/.tg->runnable_avg
1622 Â 1% -17.1% 1345 Â 0% TOTAL sched_debug.cfs_rq[54]:/.tg->runnable_avg
1605 Â 1% -16.8% 1335 Â 0% TOTAL sched_debug.cfs_rq[49]:/.tg->runnable_avg
1601 Â 1% -16.9% 1331 Â 0% TOTAL sched_debug.cfs_rq[48]:/.tg->runnable_avg
14782 Â 5% -18.7% 12017 Â 7% TOTAL sched_debug.cfs_rq[91]:/.avg->runnable_avg_sum
1595 Â 1% -16.8% 1327 Â 0% TOTAL sched_debug.cfs_rq[47]:/.tg->runnable_avg
321 Â 5% -18.9% 260 Â 7% TOTAL sched_debug.cfs_rq[91]:/.tg_runnable_contrib
17810 Â34% +39.1% 24766 Â 4% TOTAL sched_debug.cpu#2.nr_load_updates
1590 Â 1% -16.6% 1326 Â 0% TOTAL sched_debug.cfs_rq[46]:/.tg->runnable_avg
1587 Â 1% -16.5% 1324 Â 0% TOTAL sched_debug.cfs_rq[45]:/.tg->runnable_avg
1581 Â 1% -16.3% 1323 Â 0% TOTAL sched_debug.cfs_rq[44]:/.tg->runnable_avg
20364 Â 6% +21.0% 24646 Â 5% TOTAL sched_debug.cpu#32.nr_load_updates
23451 Â11% +16.0% 27201 Â10% TOTAL sched_debug.cpu#18.nr_load_updates
1576 Â 1% -16.1% 1322 Â 0% TOTAL sched_debug.cfs_rq[43]:/.tg->runnable_avg
4393 Â 3% +17.0% 5138 Â 2% TOTAL slabinfo.signal_cache.num_objs
1573 Â 1% -16.0% 1321 Â 0% TOTAL sched_debug.cfs_rq[42]:/.tg->runnable_avg
1568 Â 1% -15.9% 1319 Â 0% TOTAL sched_debug.cfs_rq[41]:/.tg->runnable_avg
1564 Â 1% -15.7% 1318 Â 0% TOTAL sched_debug.cfs_rq[40]:/.tg->runnable_avg
296 Â 4% -14.6% 253 Â 4% TOTAL sched_debug.cfs_rq[61]:/.tg_runnable_contrib
1560 Â 1% -15.5% 1318 Â 0% TOTAL sched_debug.cfs_rq[39]:/.tg->runnable_avg
1554 Â 1% -15.3% 1317 Â 0% TOTAL sched_debug.cfs_rq[38]:/.tg->runnable_avg
13680 Â 3% -14.7% 11667 Â 4% TOTAL sched_debug.cfs_rq[61]:/.avg->runnable_avg_sum
1544 Â 1% -14.8% 1315 Â 0% TOTAL sched_debug.cfs_rq[37]:/.tg->runnable_avg
3309 Â 5% -17.4% 2734 Â 2% TOTAL sched_debug.cfs_rq[32]:/.exec_clock
1534 Â 1% -14.6% 1310 Â 0% TOTAL sched_debug.cfs_rq[34]:/.tg->runnable_avg
1537 Â 1% -14.6% 1312 Â 0% TOTAL sched_debug.cfs_rq[35]:/.tg->runnable_avg
1540 Â 1% -14.7% 1314 Â 0% TOTAL sched_debug.cfs_rq[36]:/.tg->runnable_avg
553 Â43% +59.0% 879 Â 3% TOTAL numa-vmstat.node0.nr_kernel_stack
1530 Â 1% -14.5% 1308 Â 0% TOTAL sched_debug.cfs_rq[33]:/.tg->runnable_avg
1523 Â 1% -14.3% 1306 Â 0% TOTAL sched_debug.cfs_rq[32]:/.tg->runnable_avg
8851 Â43% +59.3% 14097 Â 3% TOTAL numa-meminfo.node0.KernelStack
1519 Â 1% -14.1% 1306 Â 0% TOTAL sched_debug.cfs_rq[31]:/.tg->runnable_avg
1516 Â 1% -14.0% 1304 Â 0% TOTAL sched_debug.cfs_rq[30]:/.tg->runnable_avg
1510 Â 1% -14.0% 1300 Â 0% TOTAL sched_debug.cfs_rq[28]:/.tg->runnable_avg
1513 Â 1% -14.0% 1302 Â 0% TOTAL sched_debug.cfs_rq[29]:/.tg->runnable_avg
1507 Â 1% -13.9% 1297 Â 0% TOTAL sched_debug.cfs_rq[27]:/.tg->runnable_avg
1504 Â 1% -13.8% 1296 Â 0% TOTAL sched_debug.cfs_rq[26]:/.tg->runnable_avg
1496 Â 1% -13.6% 1293 Â 0% TOTAL sched_debug.cfs_rq[24]:/.tg->runnable_avg
1492 Â 1% -13.5% 1292 Â 0% TOTAL sched_debug.cfs_rq[23]:/.tg->runnable_avg
1499 Â 1% -13.6% 1295 Â 0% TOTAL sched_debug.cfs_rq[25]:/.tg->runnable_avg
795010 Â 3% -10.6% 710653 Â 6% TOTAL sched_debug.cpu#32.avg_idle
1489 Â 1% -13.3% 1291 Â 0% TOTAL sched_debug.cfs_rq[22]:/.tg->runnable_avg
1467 Â 1% -12.9% 1278 Â 0% TOTAL sched_debug.cfs_rq[17]:/.tg->runnable_avg
1485 Â 1% -13.2% 1290 Â 0% TOTAL sched_debug.cfs_rq[21]:/.tg->runnable_avg
1463 Â 1% -12.8% 1276 Â 0% TOTAL sched_debug.cfs_rq[16]:/.tg->runnable_avg
1027 Â 6% +16.2% 1194 Â 4% TOTAL slabinfo.kmalloc-192.active_slabs
1027 Â 6% +16.2% 1194 Â 4% TOTAL slabinfo.kmalloc-192.num_slabs
43031 Â 6% +16.3% 50041 Â 4% TOTAL slabinfo.kmalloc-192.active_objs
43170 Â 6% +16.2% 50161 Â 4% TOTAL slabinfo.kmalloc-192.num_objs
1472 Â 1% -12.9% 1282 Â 0% TOTAL sched_debug.cfs_rq[18]:/.tg->runnable_avg
1479 Â 1% -13.0% 1287 Â 0% TOTAL sched_debug.cfs_rq[20]:/.tg->runnable_avg
1456 Â 1% -12.5% 1273 Â 0% TOTAL sched_debug.cfs_rq[15]:/.tg->runnable_avg
1452 Â 1% -12.3% 1273 Â 0% TOTAL sched_debug.cfs_rq[14]:/.tg->runnable_avg
862 Â 8% -12.9% 750 Â 5% TOTAL slabinfo.RAW.num_objs
862 Â 8% -12.9% 750 Â 5% TOTAL slabinfo.RAW.active_objs
1475 Â 1% -12.9% 1284 Â 0% TOTAL sched_debug.cfs_rq[19]:/.tg->runnable_avg
4393 Â 3% +14.5% 5028 Â 2% TOTAL slabinfo.signal_cache.active_objs
1446 Â 1% -12.1% 1272 Â 0% TOTAL sched_debug.cfs_rq[12]:/.tg->runnable_avg
1448 Â 1% -12.1% 1273 Â 0% TOTAL sched_debug.cfs_rq[13]:/.tg->runnable_avg
1442 Â 1% -11.9% 1271 Â 0% TOTAL sched_debug.cfs_rq[11]:/.tg->runnable_avg
1439 Â 1% -11.7% 1271 Â 0% TOTAL sched_debug.cfs_rq[10]:/.tg->runnable_avg
1437 Â 1% -11.5% 1271 Â 0% TOTAL sched_debug.cfs_rq[9]:/.tg->runnable_avg
1431 Â 1% -11.2% 1270 Â 0% TOTAL sched_debug.cfs_rq[8]:/.tg->runnable_avg
1428 Â 1% -11.1% 1269 Â 0% TOTAL sched_debug.cfs_rq[7]:/.tg->runnable_avg
1423 Â 1% -10.8% 1270 Â 0% TOTAL sched_debug.cfs_rq[6]:/.tg->runnable_avg
1421 Â 1% -10.6% 1270 Â 0% TOTAL sched_debug.cfs_rq[5]:/.tg->runnable_avg
1418 Â 1% -10.5% 1269 Â 0% TOTAL sched_debug.cfs_rq[4]:/.tg->runnable_avg
1417 Â 1% -10.5% 1268 Â 0% TOTAL sched_debug.cfs_rq[3]:/.tg->runnable_avg
5041 Â 4% +12.8% 5687 Â 1% TOTAL slabinfo.task_xstate.active_objs
5041 Â 4% +12.8% 5687 Â 1% TOTAL slabinfo.task_xstate.num_objs
20 Â18% -18.6% 16 Â 2% TOTAL sched_debug.cpu#104.ttwu_local
83828 Â 1% +9.4% 91675 Â 3% TOTAL slabinfo.kmalloc-64.active_objs
1406 Â 1% -10.1% 1264 Â 0% TOTAL sched_debug.cfs_rq[2]:/.tg->runnable_avg
1404 Â 1% -10.1% 1262 Â 0% TOTAL sched_debug.cfs_rq[1]:/.tg->runnable_avg
109592 Â 4% +6.3% 116546 Â 2% TOTAL numa-meminfo.node1.FilePages
27397 Â 4% +6.3% 29136 Â 2% TOTAL numa-vmstat.node1.nr_file_pages
36 Â 2% +8.3% 39 Â 2% TOTAL turbostat.CTMP
1382 Â 1% -9.2% 1255 Â 0% TOTAL sched_debug.cfs_rq[0]:/.tg->runnable_avg
52240 Â 5% +8.9% 56888 Â 4% TOTAL numa-meminfo.node0.Slab
31564 Â 7% +14.9% 36254 Â 5% TOTAL numa-meminfo.node1.Active
1331 Â 0% +8.1% 1439 Â 3% TOTAL slabinfo.kmalloc-64.active_slabs
1331 Â 0% +8.1% 1439 Â 3% TOTAL slabinfo.kmalloc-64.num_slabs
85255 Â 0% +8.1% 92172 Â 3% TOTAL slabinfo.kmalloc-64.num_objs
217201 ± 5% +125.5% 489860 ± 0% TOTAL time.voluntary_context_switches
17206167 ± 5% +118.8% 37639010 ± 1% TOTAL time.minor_page_faults
115930 ± 5% +116.5% 251005 ± 1% TOTAL time.involuntary_context_switches
0.00 ± 9% +121.4% 0.00 ±10% TOTAL energy.energy-cores
0.00 ± 1% +63.6% 0.00 ± 0% TOTAL energy.energy-ram
0.00 ± 3% +51.6% 0.00 ± 5% TOTAL energy.energy-pkg
7352 ± 1% +39.9% 10285 ± 0% TOTAL vmstat.system.cs
89.70 ± 0% -14.0% 77.11 ± 1% TOTAL time.user_time
1.06 ± 0% -12.9% 0.92 ± 0% TOTAL turbostat.%c0
214 ± 0% +5.9% 227 ± 0% TOTAL time.system_time
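(For reference, each row above shows the mean and relative standard deviation
for the two kernels plus the relative change; the headline row works out to
275/125 - 1 ~= +120%, reported as +119.9% presumably because the percentage
is computed from the unrounded means.)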
time.user_time
95 ++---------------------------------------------------------------------+
| *.. |
| : *.. .*. |
90 *+.*..*..*.*.. : *. *..*..*..*..* |
| : |
| * |
85 ++ |
| |
80 ++ |
| O O |
| O O O O O O
75 O+ O O O O O O O O |
| O O O O O O O O O |
| |
70 ++---------------------------------------------------------------------+
time.system_time
235 ++--------------------------------------------------------------------+
| |
| O |
230 ++ O O O O O O O O O |
O O O O O O O |
| O O O O O O O O
225 ++ O |
| |
220 ++ |
| |
| * |
215 ++ + : .*.. |
| + : *.*..*. *..*.*..* |
*..*..*.*..* : .. |
210 ++---------------*----------------------------------------------------+
time.voluntary_context_switches
500000 ++------------------------------------------------------O--O--O----O
| O |
450000 ++ O |
| O O |
400000 ++ O O O O O O O O O O O |
| O O |
350000 O+ O O O O |
| |
300000 ++ |
| |
250000 ++ |
*..*.*..*..*. .*..*. .*.. |
200000 ++ *..*..*.*. *..*. * |
| |
150000 ++-----------------------------------------------------------------+
energy.energy-cores
2.5e-08 ++----------------------------------------------------------------+
| O |
| |
2e-08 ++ |
| O O O O
| O |
1.5e-08 ++ O |
| O O O O |
1e-08 O+ O O O O O O O O O O O O O O |
| *..*.*..*..*.*..*..*.*..* |
| : |
5e-09 ++ : |
| : |
| : |
0 *+-*-*--*--*------------------------------------------------------+
energy.energy-pkg
4e-08 ++----------------------------------------------------------------+
| O |
3.5e-08 ++ O
| O O O O |
3e-08 ++ O |
2.5e-08 ++ O O O O O O O O O O O O O O |
O O O O..O.*..*..*.*..*..*.*..* |
2e-08 ++ : |
| : |
1.5e-08 ++ : |
1e-08 ++ : |
| : |
5e-09 ++ : |
| : |
0 *+-*-*--*--*------------------------------------------------------+
aim9.shell_rtns_3.ops_per_sec
300 ++--------------------------------------------------------------------+
280 ++ O |
| O O O O
260 ++ O |
240 ++ O O |
| O O O O O O O O O O O |
220 ++ O O |
200 O+ O O O O |
180 ++ |
| |
160 ++ |
140 ++ |
| .*.*..*..*..*..*.*..* |
120 *+.*..*.*..*..*..*. |
100 ++--------------------------------------------------------------------+
turbostat.%c0
1.12 ++-------------------------------------------------------------------+
1.1 ++ *..* |
| + + |
1.08 ++ .*. .* + |
1.06 *+.*. *..*..*. *..*.*..*..*..* |
1.04 ++ |
1.02 ++ |
| |
1 ++ |
0.98 O+ O O O O O |
0.96 ++ O O O O O O O O O O O O O |
0.94 ++ O O |
| O |
0.92 ++ O O O O
0.9 ++-------------------------------------------------------------------+
turbostat.Pkg_W
170 ++--------------------------------------------------------------------+
| O |
160 ++ |
150 ++ |
| O
140 ++ O O O |
| |
130 ++ O |
| O |
120 ++ |
110 ++ O O O O |
O O O O O O O O O O O O O O O |
100 ++ |
*..*..*.*..*..*..*..*.*..*..*..*..*.*..* |
90 ++--------------------------------------------------------------------+
turbostat.Cor_W
100 ++--------------------------------------------------------------O-----+
| |
90 ++ |
| |
80 ++ O
| O O O |
70 ++ O |
| O |
60 ++ |
| O |
50 ++ O O O O O O O O O O O O O |
O O O O O |
40 ++ |
*..*..*.*..*..*..*..*.*..*..*..*..*.*..* |
30 ++--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
---
testcase: aim9
default_monitors:
  watch-oom:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  energy:
  cpuidle:
  cpufreq:
  turbostat:
  sched_debug:
    interval: 10
  pmeter:
model: Brickland Ivy Bridge-EX
nr_cpu: 120
memory: 512G
hdd_partitions:
swap_partitions:
aim9:
  testtime: 300s
  test:
  - shell_rtns_3
branch: linus/master
commit: 19583ca584d6f574384e17fe7613dfaeadcdc4a6
repeat_to: 3
enqueue_time: 2014-09-25 21:56:15.069539322 +08:00
testbox: brickland3
kconfig: x86_64-rhel
kernel: "/kernel/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/vmlinuz-3.16.0"
user: lkp
queue: wfg
result_root: "/result/brickland3/aim9/300s-shell_rtns_3/debian-x86_64.cgz/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/0"
job_file: "/lkp/scheduled/brickland3/wfg_aim9-300s-shell_rtns_3-x86_64-rhel-19583ca584d6f574384e17fe7613dfaeadcdc4a6-2.yaml"
dequeue_time: 2014-09-30 11:42:21.894696145 +08:00
history_time: 348.87
job_state: finished
loadavg: 0.84 0.66 0.31 1/967 52904
start_time: '1412048625'
end_time: '1412048925'
version: "/lkp/lkp/.src-20140929-152043"
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx