[loopback] b17c706987f: +175.7% netperf.Throughput_Mbps
From: Fengguang Wu
Date: Wed Oct 01 2014 - 03:55:19 EST
Hi Daniel,
FYI, we noticed a nice performance improvement in commit
b17c706987fa6f28bdc1771c8266e7a69e22adcb ("loopback: sctp: add NETIF_F_SCTP_CSUM to device features")
test case: lkp-nex04/netperf/300s-200%-10K-SCTP_STREAM_MANY
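For context: the patch is essentially a one-flag change. Advertising NETIF_F_SCTP_CSUM on the loopback device lets the SCTP stack leave the CRC32c checksum to the "hardware", which on loopback means it is simply never computed in software for local traffic. That matches the profile below, where the __crc32c_le/chksum_update samples (about 45% of cycles before the patch) drop to zero. Below is a minimal sketch of that kind of change, assuming it lands in the feature mask set up by loopback_setup() in drivers/net/loopback.c; the surrounding flags are illustrative, not the verbatim upstream list:

	/*
	 * drivers/net/loopback.c -- illustrative sketch, not the exact upstream diff.
	 * The interesting part is the added NETIF_F_SCTP_CSUM bit, which lets the
	 * SCTP output path skip the software CRC32c for packets that never leave
	 * the machine.
	 */
	static void loopback_setup(struct net_device *dev)
	{
		/* ... other loopback setup elided ... */
		dev->features		= NETIF_F_SG | NETIF_F_FRAGLIST
					| NETIF_F_ALL_TSO
					| NETIF_F_UFO
					| NETIF_F_HW_CSUM
					| NETIF_F_SCTP_CSUM	/* new: SCTP CRC32c "offload" on loopback */
					| NETIF_F_HIGHDMA
					| NETIF_F_LLTX
					| NETIF_F_NETNS_LOCAL
					| NETIF_F_LOOPBACK;
		/* ... */
	}

Most of the remaining deltas below (sched_debug, cpuidle, turbostat) look like secondary effects of the CPUs going from partly idle and busy checksumming to fully busy moving SCTP data.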
72f8e06f3ea022d b17c706987fa6f28bdc1771c8
--------------- -------------------------
%stddev %change %stddev
\ | /
664 ± 0% +175.7% 1832 ± 0% TOTAL netperf.Throughput_Mbps
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[63]:/.nr_running
947669 ± 2% +681.8% 7408572 ± 1% TOTAL sched_debug.cfs_rq[63]:/.min_vruntime
19701 ± 3% +2814.0% 574098 ± 1% TOTAL sched_debug.cpu#63.ttwu_local
41754 ± 1% -99.5% 200 ±43% TOTAL softirqs.HRTIMER
5 ±20% +400.0% 29 ± 2% TOTAL sched_debug.cpu#63.cpu_load[4]
2.59 ± 1% -100.0% 0.00 ± 0% TOTAL perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_idle_call.arch_cpu_idle.cpu_startup_entry
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#63.nr_running
72 ±48% -95.8% 3 ±42% TOTAL sched_debug.cfs_rq[62]:/.blocked_load_avg
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[62]:/.nr_running
0.24 ± 7% +2565.6% 6.50 ± 4% TOTAL perf-profile.cpu-cycles.sctp_transport_timeout.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter
0.22 ± 3% +1442.3% 3.42 ± 2% TOTAL perf-profile.cpu-cycles._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller
0.27 ± 6% +1100.0% 3.29 ± 2% TOTAL perf-profile.cpu-cycles._raw_spin_lock.free_one_page.__free_pages_ok.__free_pages.__free_memcg_kmem_pages
0.04 ±10% +8463.2% 3.25 ± 2% TOTAL perf-profile.cpu-cycles.lock_timer_base.isra.35.mod_timer.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork
0.04 ±13% +6227.8% 2.28 ± 3% TOTAL perf-profile.cpu-cycles.memcpy.sctp_packet_transmit_chunk.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter
0.00 +Inf% 1.97 ± 8% TOTAL perf-profile.cpu-cycles._raw_spin_lock_irqsave.mod_timer.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork
11 ±44% +9151.8% 1036 ±31% TOTAL sched_debug.cfs_rq[62]:/.nr_spread_over
1.15 ± 7% -94.4% 0.06 ± 7% TOTAL perf-profile.cpu-cycles._raw_spin_lock_bh.lock_sock_nested.sctp_recvmsg.sock_common_recvmsg.sock_recvmsg
33688875 ± 2% -89.6% 3504734 ±18% TOTAL cpuidle.C1-NHM.time
281217 ± 2% -90.9% 25698 ±41% TOTAL cpuidle.C1-NHM.usage
45558795 ± 0% -99.9% 27898 ±39% TOTAL cpuidle.C3-NHM.usage
39.61 ± 0% -99.7% 0.11 ±30% TOTAL turbostat.%c1
5.60 ± 0% -94.0% 0.34 ± 1% TOTAL turbostat.%c3
876992 Â 3% +736.3% 7333987 Â 1% TOTAL sched_debug.cfs_rq[62]:/.min_vruntime
19686 Â 2% +2810.9% 573039 Â 1% TOTAL sched_debug.cpu#62.ttwu_local
5 Â25% +403.4% 29 Â 3% TOTAL sched_debug.cpu#62.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#62.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[61]:/.nr_running
8 Â22% +13481.4% 1168 Â36% TOTAL sched_debug.cfs_rq[61]:/.nr_spread_over
899343 Â 6% +715.7% 7335525 Â 2% TOTAL sched_debug.cfs_rq[61]:/.min_vruntime
19673 Â 4% +2790.5% 568655 Â 2% TOTAL sched_debug.cpu#61.ttwu_local
989853 Â 2% -99.2% 7480 Â48% TOTAL sched_debug.cpu#61.sched_goidle
4 Â40% +581.0% 28 Â 3% TOTAL sched_debug.cpu#61.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#61.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[60]:/.nr_running
893662 Â 5% +724.8% 7370847 Â 0% TOTAL sched_debug.cfs_rq[60]:/.min_vruntime
19607 Â 3% +2827.4% 573986 Â 1% TOTAL sched_debug.cpu#60.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#60.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[59]:/.nr_running
86308 Â18% +7142.5% 6250836 Â 8% TOTAL sched_debug.cfs_rq[59]:/.max_vruntime
899110 Â 4% +717.5% 7350607 Â 0% TOTAL sched_debug.cfs_rq[59]:/.min_vruntime
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#0.nr_running
86308 Â18% +7142.5% 6250836 Â 8% TOTAL sched_debug.cfs_rq[59]:/.MIN_vruntime
20221 Â 4% +2739.2% 574120 Â 1% TOTAL sched_debug.cpu#59.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#59.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[58]:/.nr_running
370050 Â 3% -98.8% 4378 Â24% TOTAL sched_debug.cpu#0.sched_goidle
32479 Â 4% +1698.6% 584171 Â 1% TOTAL sched_debug.cpu#0.ttwu_local
947194 Â 4% +676.5% 7354983 Â 1% TOTAL sched_debug.cfs_rq[58]:/.min_vruntime
19 Â12% +26074.0% 5025 Â32% TOTAL sched_debug.cfs_rq[0]:/.nr_spread_over
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[0]:/.nr_running
20721 Â 4% +2663.3% 572601 Â 1% TOTAL sched_debug.cpu#58.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#58.nr_running
0 Â 0% +Inf% 1 Â33% TOTAL sched_debug.cfs_rq[57]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#1.nr_running
964313 Â 4% +662.7% 7355115 Â 0% TOTAL sched_debug.cfs_rq[57]:/.min_vruntime
21834 Â 6% +2531.4% 574540 Â 1% TOTAL sched_debug.cpu#57.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#57.nr_running
354246 Â 1% -98.7% 4745 Â38% TOTAL sched_debug.cpu#1.sched_goidle
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[56]:/.nr_running
28829 Â 5% +1914.4% 580717 Â 0% TOTAL sched_debug.cpu#1.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[1]:/.nr_running
960131 Â 3% +667.5% 7368750 Â 1% TOTAL sched_debug.cfs_rq[56]:/.min_vruntime
21418 Â 2% +2584.0% 574857 Â 0% TOTAL sched_debug.cpu#56.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#2.nr_running
4 Â17% +534.8% 29 Â 3% TOTAL sched_debug.cpu#56.cpu_load[4]
5 Â18% +433.3% 28 Â 2% TOTAL sched_debug.cpu#56.cpu_load[3]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#56.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[55]:/.nr_running
71260 Â42% +8857.8% 6383312 Â 6% TOTAL sched_debug.cfs_rq[55]:/.max_vruntime
935340 Â 4% +686.3% 7354731 Â 1% TOTAL sched_debug.cfs_rq[55]:/.min_vruntime
363420 Â 1% -98.9% 3993 Â40% TOTAL sched_debug.cpu#2.sched_goidle
71260 Â42% +8857.8% 6383312 Â 6% TOTAL sched_debug.cfs_rq[55]:/.MIN_vruntime
19268 Â 3% +2867.0% 571688 Â 2% TOTAL sched_debug.cpu#55.ttwu_local
29222 Â 4% +1889.9% 581489 Â 0% TOTAL sched_debug.cpu#2.ttwu_local
5 Â33% +457.7% 29 Â 2% TOTAL sched_debug.cpu#55.cpu_load[4]
21 Â10% +5539.8% 1218 Â19% TOTAL sched_debug.cfs_rq[2]:/.nr_spread_over
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[2]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#55.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[54]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#3.nr_running
45064 Â18% +14562.1% 6607460 Â 5% TOTAL sched_debug.cfs_rq[54]:/.max_vruntime
934539 Â 6% +685.1% 7337176 Â 1% TOTAL sched_debug.cfs_rq[54]:/.min_vruntime
45064 Â18% +14562.1% 6607460 Â 5% TOTAL sched_debug.cfs_rq[54]:/.MIN_vruntime
19262 Â 6% +2872.1% 572494 Â 1% TOTAL sched_debug.cpu#54.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#54.nr_running
356613 Â 1% -99.2% 2934 Â29% TOTAL sched_debug.cpu#3.sched_goidle
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[53]:/.nr_running
27577 Â 5% +2003.9% 580222 Â 1% TOTAL sched_debug.cpu#3.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[3]:/.nr_running
933523 Â 4% +686.2% 7339367 Â 1% TOTAL sched_debug.cfs_rq[53]:/.min_vruntime
19832 Â 3% +2803.2% 575758 Â 1% TOTAL sched_debug.cpu#53.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#4.nr_running
5 Â46% +410.7% 28 Â 4% TOTAL sched_debug.cpu#53.cpu_load[3]
6 Â47% +354.8% 28 Â 5% TOTAL sched_debug.cpu#53.cpu_load[0]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#53.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[52]:/.nr_running
923091 Â 3% +698.0% 7366148 Â 2% TOTAL sched_debug.cfs_rq[52]:/.min_vruntime
348410 Â 2% -98.8% 4230 Â39% TOTAL sched_debug.cpu#4.sched_goidle
19701 Â 3% +2820.0% 575293 Â 1% TOTAL sched_debug.cpu#52.ttwu_local
26645 Â 4% +2076.8% 580019 Â 1% TOTAL sched_debug.cpu#4.ttwu_local
178678 Â25% +3627.5% 6660167 Â 3% TOTAL sched_debug.cfs_rq[4]:/.MIN_vruntime
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#52.nr_running
178678 Â25% +3627.5% 6660167 Â 3% TOTAL sched_debug.cfs_rq[4]:/.max_vruntime
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[4]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[51]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#5.nr_running
929685 Â 4% +688.0% 7325719 Â 1% TOTAL sched_debug.cfs_rq[51]:/.min_vruntime
19761 Â 4% +2813.8% 575800 Â 1% TOTAL sched_debug.cpu#51.ttwu_local
5 Â37% +461.5% 29 Â 2% TOTAL sched_debug.cpu#51.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#51.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[50]:/.nr_running
26189 Â 3% +2110.9% 579032 Â 0% TOTAL sched_debug.cpu#5.ttwu_local
963932 Â 3% +657.4% 7301196 Â 1% TOTAL sched_debug.cfs_rq[50]:/.min_vruntime
23557 Â 2% +2326.4% 571608 Â 2% TOTAL sched_debug.cpu#23.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[5]:/.nr_running
20098 Â 5% +2752.9% 573394 Â 1% TOTAL sched_debug.cpu#50.ttwu_local
6 Â34% +380.0% 28 Â 1% TOTAL sched_debug.cpu#50.cpu_load[4]
6 Â35% +343.8% 28 Â 1% TOTAL sched_debug.cpu#50.cpu_load[3]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#6.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#50.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[49]:/.nr_running
16 Â 8% +4700.0% 768 Â49% TOTAL sched_debug.cfs_rq[49]:/.nr_spread_over
25690 Â 2% +2152.9% 578774 Â 1% TOTAL sched_debug.cpu#6.ttwu_local
988774 Â 1% +643.1% 7347606 Â 1% TOTAL sched_debug.cfs_rq[49]:/.min_vruntime
17 Â16% +6484.7% 1119 Â36% TOTAL sched_debug.cfs_rq[6]:/.nr_spread_over
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[6]:/.nr_running
21655 Â 3% +2554.9% 574933 Â 1% TOTAL sched_debug.cpu#49.ttwu_local
917931 Â 0% -99.2% 7364 Â47% TOTAL sched_debug.cpu#49.sched_goidle
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#49.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#7.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[48]:/.nr_running
963896 Â 5% +662.9% 7353616 Â 1% TOTAL sched_debug.cfs_rq[48]:/.min_vruntime
20962 Â 5% +2638.5% 574061 Â 1% TOTAL sched_debug.cpu#48.ttwu_local
342563 Â 2% -99.0% 3297 Â40% TOTAL sched_debug.cpu#7.sched_goidle
25834 Â 4% +2138.5% 578300 Â 0% TOTAL sched_debug.cpu#7.ttwu_local
4 Â48% +500.0% 28 Â 1% TOTAL sched_debug.cpu#48.cpu_load[3]
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[7]:/.nr_running
4 Â44% +513.0% 28 Â 1% TOTAL sched_debug.cpu#48.cpu_load[0]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#48.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#8.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[47]:/.nr_running
1026500 Â 4% +606.8% 7255532 Â 1% TOTAL sched_debug.cfs_rq[47]:/.min_vruntime
21776 Â 3% +2530.7% 572873 Â 1% TOTAL sched_debug.cpu#47.ttwu_local
940486 Â 1% -99.3% 6905 Â46% TOTAL sched_debug.cpu#47.sched_goidle
375887 Â 2% -98.8% 4502 Â40% TOTAL sched_debug.cpu#8.sched_goidle
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#47.nr_running
46 Â43% -94.0% 2 Â41% TOTAL sched_debug.cfs_rq[46]:/.blocked_load_avg
27667 Â 3% +1978.0% 574926 Â 1% TOTAL sched_debug.cpu#8.ttwu_local
6 Â34% +370.0% 28 Â 1% TOTAL sched_debug.cfs_rq[46]:/.runnable_load_avg
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[46]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[8]:/.nr_running
973194 Â 3% +646.5% 7264433 Â 1% TOTAL sched_debug.cfs_rq[46]:/.min_vruntime
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#9.nr_running
22161 Â 2% +2482.1% 572230 Â 1% TOTAL sched_debug.cpu#46.ttwu_local
4 Â15% +525.0% 30 Â 2% TOTAL sched_debug.cpu#46.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#46.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[45]:/.nr_running
359971 Â 2% -99.1% 3085 Â41% TOTAL sched_debug.cpu#9.sched_goidle
1018595 Â 4% +611.1% 7243421 Â 1% TOTAL sched_debug.cfs_rq[45]:/.min_vruntime
27131 Â 2% +2017.8% 574580 Â 1% TOTAL sched_debug.cpu#9.ttwu_local
134404 Â31% +4758.2% 6529609 Â 8% TOTAL sched_debug.cfs_rq[9]:/.MIN_vruntime
22294 Â 4% +2467.1% 572326 Â 1% TOTAL sched_debug.cpu#45.ttwu_local
134404 Â31% +4758.2% 6529609 Â 8% TOTAL sched_debug.cfs_rq[9]:/.max_vruntime
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[9]:/.nr_running
5 Â22% +455.6% 30 Â 2% TOTAL sched_debug.cpu#45.cpu_load[4]
0 Â 0% +Inf% 2 Â18% TOTAL sched_debug.cpu#45.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[44]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#10.nr_running
360383 Â 1% -98.8% 4421 Â45% TOTAL sched_debug.cpu#10.sched_goidle
27338 Â 3% +2000.0% 574125 Â 1% TOTAL sched_debug.cpu#10.ttwu_local
1021430 Â 4% +609.5% 7247197 Â 1% TOTAL sched_debug.cfs_rq[44]:/.min_vruntime
231918 Â24% +2549.3% 6144271 Â 8% TOTAL sched_debug.cfs_rq[10]:/.MIN_vruntime
231918 Â24% +2549.3% 6144271 Â 8% TOTAL sched_debug.cfs_rq[10]:/.max_vruntime
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[10]:/.nr_running
22562 Â 3% +2442.1% 573549 Â 1% TOTAL sched_debug.cpu#44.ttwu_local
5 Â25% +492.0% 29 Â 1% TOTAL sched_debug.cpu#44.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#11.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#44.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[43]:/.nr_running
1019742 Â 2% +614.0% 7280699 Â 1% TOTAL sched_debug.cfs_rq[43]:/.min_vruntime
366743 Â 3% -98.8% 4544 Â48% TOTAL sched_debug.cpu#11.sched_goidle
22781 Â 3% +2411.2% 572090 Â 1% TOTAL sched_debug.cpu#43.ttwu_local
27283 Â 5% +1994.9% 571569 Â 1% TOTAL sched_debug.cpu#11.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#43.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[11]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[42]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#12.nr_running
1013625 Â 3% +610.3% 7199592 Â 1% TOTAL sched_debug.cfs_rq[42]:/.min_vruntime
23285 Â 2% +2357.3% 572199 Â 1% TOTAL sched_debug.cpu#42.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#42.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[41]:/.nr_running
363201 Â 5% -99.2% 3010 Â32% TOTAL sched_debug.cpu#12.sched_goidle
1026049 Â 2% +605.1% 7234511 Â 2% TOTAL sched_debug.cfs_rq[41]:/.min_vruntime
26558 Â 1% +2059.8% 573625 Â 1% TOTAL sched_debug.cpu#12.ttwu_local
24712 Â 3% +2225.8% 574772 Â 1% TOTAL sched_debug.cpu#41.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[12]:/.nr_running
908047 Â 0% -99.3% 6771 Â46% TOTAL sched_debug.cpu#41.sched_goidle
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#41.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[40]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#13.nr_running
1045087 Â 1% +592.6% 7238603 Â 1% TOTAL sched_debug.cfs_rq[40]:/.min_vruntime
23905 Â 4% +2300.9% 573931 Â 1% TOTAL sched_debug.cpu#40.ttwu_local
362073 Â 4% -99.0% 3783 Â43% TOTAL sched_debug.cpu#13.sched_goidle
6 Â45% +383.3% 29 Â 0% TOTAL sched_debug.cpu#40.cpu_load[4]
27170 Â 6% +2006.6% 572362 Â 1% TOTAL sched_debug.cpu#13.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#40.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[13]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[39]:/.nr_running
956859 Â 2% +664.8% 7318047 Â 1% TOTAL sched_debug.cfs_rq[39]:/.min_vruntime
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#14.nr_running
22156 Â 6% +2515.2% 579425 Â 1% TOTAL sched_debug.cpu#39.ttwu_local
5 Â20% +400.0% 29 Â 2% TOTAL sched_debug.cpu#39.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#39.nr_running
342979 Â 2% -99.1% 3153 Â32% TOTAL sched_debug.cpu#14.sched_goidle
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[38]:/.nr_running
25831 Â 2% +2108.8% 570574 Â 1% TOTAL sched_debug.cpu#14.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[14]:/.nr_running
998185 Â 4% +630.6% 7292251 Â 1% TOTAL sched_debug.cfs_rq[38]:/.min_vruntime
21969 Â 3% +2536.5% 579224 Â 1% TOTAL sched_debug.cpu#38.ttwu_local
0 Â 0% +Inf% 2 Â18% TOTAL sched_debug.cpu#15.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#38.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[37]:/.nr_running
8 Â22% +7077.3% 631 Â47% TOTAL sched_debug.cfs_rq[37]:/.nr_spread_over
976930 Â 2% +653.4% 7359797 Â 1% TOTAL sched_debug.cfs_rq[37]:/.min_vruntime
365758 Â 2% -98.9% 3877 Â47% TOTAL sched_debug.cpu#15.sched_goidle
22211 Â 5% +2503.0% 578152 Â 0% TOTAL sched_debug.cpu#37.ttwu_local
25988 Â 4% +2107.9% 573784 Â 1% TOTAL sched_debug.cpu#15.ttwu_local
4 Â23% +559.1% 29 Â 2% TOTAL sched_debug.cpu#37.cpu_load[4]
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[15]:/.nr_running
5 Â14% +442.3% 28 Â 2% TOTAL sched_debug.cpu#37.cpu_load[3]
5 Â 9% +414.8% 27 Â 2% TOTAL sched_debug.cpu#37.cpu_load[2]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#37.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[36]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#16.nr_running
10 Â17% +6420.8% 691 Â26% TOTAL sched_debug.cfs_rq[36]:/.nr_spread_over
951225 Â 5% +672.6% 7349084 Â 1% TOTAL sched_debug.cfs_rq[36]:/.min_vruntime
22668 Â 5% +2450.8% 578229 Â 1% TOTAL sched_debug.cpu#36.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#36.nr_running
25231 Â 3% +2177.7% 574692 Â 1% TOTAL sched_debug.cpu#16.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[35]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[16]:/.nr_running
969215 Â 3% +658.8% 7354084 Â 2% TOTAL sched_debug.cfs_rq[35]:/.min_vruntime
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#17.nr_running
22948 Â 6% +2424.6% 579345 Â 1% TOTAL sched_debug.cpu#35.ttwu_local
4 Â44% +615.0% 28 Â 3% TOTAL sched_debug.cpu#35.cpu_load[4]
4 Â48% +518.2% 27 Â 4% TOTAL sched_debug.cpu#35.cpu_load[0]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#35.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[34]:/.nr_running
24619 Â 2% +2234.0% 574630 Â 1% TOTAL sched_debug.cpu#17.ttwu_local
985770 Â 4% +646.0% 7353877 Â 1% TOTAL sched_debug.cfs_rq[34]:/.min_vruntime
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[17]:/.nr_running
23297 Â 6% +2389.1% 579887 Â 1% TOTAL sched_debug.cpu#34.ttwu_local
4 Â42% +534.8% 29 Â 2% TOTAL sched_debug.cpu#34.cpu_load[4]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#18.nr_running
5 Â41% +429.6% 28 Â 3% TOTAL sched_debug.cpu#34.cpu_load[3]
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#34.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[33]:/.nr_running
11 Â39% +7127.3% 795 Â29% TOTAL sched_debug.cfs_rq[33]:/.nr_spread_over
952580 Â 1% +670.8% 7342099 Â 1% TOTAL sched_debug.cfs_rq[33]:/.min_vruntime
24246 Â 4% +2265.3% 573496 Â 1% TOTAL sched_debug.cpu#18.ttwu_local
24645 Â 6% +2249.7% 579112 Â 1% TOTAL sched_debug.cpu#33.ttwu_local
933626 Â 1% -99.3% 6613 Â47% TOTAL sched_debug.cpu#33.sched_goidle
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[18]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#33.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[32]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#19.nr_running
950196 Â 2% +670.3% 7319391 Â 2% TOTAL sched_debug.cfs_rq[32]:/.min_vruntime
25033 Â 5% +2211.6% 578682 Â 1% TOTAL sched_debug.cpu#32.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#32.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[31]:/.nr_running
23931 Â 3% +2304.2% 575367 Â 1% TOTAL sched_debug.cpu#19.ttwu_local
23792 Â 1% +2309.0% 573173 Â 1% TOTAL sched_debug.cpu#31.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[19]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#31.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[30]:/.nr_running
45.19 ± 0% -100.0% 0.00 ± 0% TOTAL perf-profile.cpu-cycles.__crc32c_le.chksum_update.crypto_shash_update.crc32c.sctp_csum_update
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#20.nr_running
23990 Â 2% +2299.0% 575544 Â 1% TOTAL sched_debug.cpu#30.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#30.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[29]:/.nr_running
333744 Â 2% -99.1% 3032 Â30% TOTAL sched_debug.cpu#20.sched_goidle
23769 Â 2% +2321.8% 575648 Â 1% TOTAL sched_debug.cpu#20.ttwu_local
23963 Â 2% +2289.1% 572507 Â 1% TOTAL sched_debug.cpu#29.ttwu_local
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[20]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#29.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[28]:/.nr_running
16 Â16% +14806.2% 2385 Â48% TOTAL sched_debug.cfs_rq[28]:/.nr_spread_over
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#21.nr_running
23955 Â 2% +2294.9% 573707 Â 1% TOTAL sched_debug.cpu#28.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#28.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[27]:/.nr_running
200672 Â41% +3130.2% 6482112 Â11% TOTAL sched_debug.cfs_rq[27]:/.max_vruntime
333344 Â 3% -99.4% 2160 Â44% TOTAL sched_debug.cpu#21.sched_goidle
200672 Â41% +3130.2% 6482112 Â11% TOTAL sched_debug.cfs_rq[27]:/.MIN_vruntime
24388 Â 2% +2254.3% 574189 Â 1% TOTAL sched_debug.cpu#27.ttwu_local
23766 Â 2% +2316.8% 574378 Â 1% TOTAL sched_debug.cpu#21.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#27.nr_running
16 Â23% +8230.9% 1349 Â31% TOTAL sched_debug.cfs_rq[21]:/.nr_spread_over
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[21]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[26]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#22.nr_running
24877 Â 2% +2200.2% 572209 Â 1% TOTAL sched_debug.cpu#26.ttwu_local
333909 Â 4% -98.5% 4876 Â31% TOTAL sched_debug.cpu#26.sched_goidle
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#26.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[25]:/.nr_running
21 Â22% +8784.9% 1883 Â35% TOTAL sched_debug.cfs_rq[25]:/.nr_spread_over
24697 Â 4% +2214.3% 571582 Â 1% TOTAL sched_debug.cpu#25.ttwu_local
24175 Â 5% +2283.3% 576162 Â 1% TOTAL sched_debug.cpu#22.ttwu_local
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#25.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[22]:/.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[24]:/.nr_running
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#23.nr_running
25076 Â 2% +2194.8% 575457 Â 1% TOTAL sched_debug.cpu#24.ttwu_local
336969 Â 2% -99.1% 3193 Â49% TOTAL sched_debug.cpu#24.sched_goidle
0 Â 0% +Inf% 2 Â 0% TOTAL sched_debug.cpu#24.nr_running
0 Â 0% +Inf% 1 Â 0% TOTAL sched_debug.cfs_rq[23]:/.nr_running
420529 Â14% -80.8% 80883 Â 6% TOTAL sched_debug.cpu#24.avg_idle
35 Â 1% +394.3% 174 Â 0% TOTAL vmstat.procs.r
320932 Â15% -77.7% 71461 Â 8% TOTAL sched_debug.cpu#26.avg_idle
295750 Â21% -76.1% 70793 Â17% TOTAL sched_debug.cpu#2.avg_idle
623387 Â 0% -79.5% 127567 Â 1% TOTAL softirqs.SCHED
6 Â42% +320.6% 28 Â 1% TOTAL sched_debug.cpu#40.cpu_load[3]
6 Â33% +376.7% 28 Â 3% TOTAL sched_debug.cpu#51.cpu_load[3]
6 Â27% +358.1% 28 Â 4% TOTAL sched_debug.cpu#62.cpu_load[3]
6 Â27% +358.1% 28 Â 2% TOTAL sched_debug.cpu#59.cpu_load[4]
5 Â20% +403.4% 29 Â 2% TOTAL sched_debug.cpu#44.cpu_load[3]
5 Â25% +437.0% 29 Â 2% TOTAL sched_debug.cpu#38.cpu_load[4]
6 Â21% +361.3% 28 Â 3% TOTAL sched_debug.cpu#46.cpu_load[1]
6 Â10% +380.0% 28 Â 4% TOTAL sched_debug.cpu#46.cpu_load[2]
6 Â18% +364.5% 28 Â 2% TOTAL sched_debug.cpu#44.cpu_load[2]
6 Â32% +364.5% 28 Â 1% TOTAL sched_debug.cpu#36.cpu_load[4]
6 Â21% +358.1% 28 Â 2% TOTAL sched_debug.cpu#44.cpu_load[1]
6 Â29% +354.8% 28 Â 3% TOTAL sched_debug.cpu#44.cpu_load[0]
6 Â44% +361.3% 28 Â 1% TOTAL sched_debug.cpu#32.cpu_load[4]
6 Â33% +373.3% 28 Â 2% TOTAL sched_debug.cpu#55.cpu_load[3]
6 Â28% +323.5% 28 Â 4% TOTAL sched_debug.cpu#46.cpu_load[0]
5 Â 6% +406.9% 29 Â 3% TOTAL sched_debug.cpu#46.cpu_load[3]
340647 Â14% -77.1% 77839 Â14% TOTAL sched_debug.cpu#25.avg_idle
6 Â18% +376.7% 28 Â 2% TOTAL sched_debug.cpu#38.cpu_load[3]
4 Â37% +508.7% 28 Â 2% TOTAL sched_debug.cpu#35.cpu_load[3]
6 Â29% +314.7% 28 Â 4% TOTAL sched_debug.cpu#56.cpu_load[0]
6 Â15% +346.9% 28 Â 2% TOTAL sched_debug.cpu#63.cpu_load[3]
7 Â47% +305.7% 28 Â 1% TOTAL sched_debug.cpu#32.cpu_load[3]
6 Â17% +317.6% 28 Â 3% TOTAL sched_debug.cpu#52.cpu_load[4]
6 Â15% +343.8% 28 Â 3% TOTAL sched_debug.cpu#56.cpu_load[2]
6 Â37% +337.5% 28 Â 3% TOTAL sched_debug.cpu#55.cpu_load[2]
6 Â23% +343.8% 28 Â 1% TOTAL sched_debug.cpu#39.cpu_load[3]
6 Â29% +351.6% 28 Â 5% TOTAL sched_debug.cpu#34.cpu_load[1]
6 Â31% +354.8% 28 Â 4% TOTAL sched_debug.cpu#34.cpu_load[2]
6 Â42% +358.1% 28 Â 5% TOTAL sched_debug.cpu#53.cpu_load[1]
6 Â40% +373.3% 28 Â 3% TOTAL sched_debug.cpu#53.cpu_load[2]
6 Â28% +327.3% 28 Â 1% TOTAL sched_debug.cpu#36.cpu_load[3]
6 Â20% +327.3% 28 Â 2% TOTAL sched_debug.cpu#56.cpu_load[1]
7 Â29% +297.1% 27 Â 1% TOTAL sched_debug.cpu#50.cpu_load[2]
7 Â44% +265.8% 27 Â 1% TOTAL sched_debug.cpu#32.cpu_load[2]
317802 Â 9% -77.7% 70785 Â 9% TOTAL sched_debug.cpu#4.avg_idle
346844 Â10% -77.6% 77616 Â 8% TOTAL sched_debug.cpu#16.avg_idle
4 Â40% +475.0% 27 Â 2% TOTAL sched_debug.cpu#35.cpu_load[1]
5 Â35% +452.0% 27 Â 2% TOTAL sched_debug.cfs_rq[48]:/.runnable_load_avg
4 Â40% +470.8% 27 Â 2% TOTAL sched_debug.cpu#35.cpu_load[2]
5 Â18% +385.7% 27 Â 4% TOTAL sched_debug.cpu#37.cpu_load[0]
5 Â 6% +369.0% 27 Â 2% TOTAL sched_debug.cpu#37.cpu_load[1]
366939 Â13% -78.7% 78115 Â 9% TOTAL sched_debug.cpu#30.avg_idle
1023 Â37% +323.5% 4333 Â 2% TOTAL sched_debug.cpu#48.curr->pid
327585 Â 9% -77.2% 74734 Â11% TOTAL sched_debug.cpu#17.avg_idle
12595 Â 0% -76.7% 2929 Â 0% TOTAL uptime.idle
338343 Â14% -78.4% 73057 Â 5% TOTAL sched_debug.cpu#31.avg_idle
6 Â37% +353.1% 29 Â 4% TOTAL sched_debug.cpu#58.cpu_load[4]
7 Â23% +322.9% 29 Â 3% TOTAL sched_debug.cpu#47.cpu_load[4]
7 Â28% +316.7% 30 Â 2% TOTAL sched_debug.cpu#42.cpu_load[4]
316565 Â15% -74.7% 80204 Â11% TOTAL sched_debug.cpu#23.avg_idle
295009 Â22% -75.0% 73750 Â 9% TOTAL sched_debug.cpu#22.avg_idle
19299589 Â 1% +316.6% 80411460 Â 1% TOTAL proc-vmstat.pgalloc_dma32
6 Â26% +311.8% 28 Â 4% TOTAL sched_debug.cpu#62.cpu_load[2]
6 Â21% +393.3% 29 Â 2% TOTAL sched_debug.cpu#45.cpu_load[3]
7 Â18% +311.4% 28 Â 2% TOTAL sched_debug.cpu#45.cpu_load[2]
7 Â10% +286.8% 29 Â 1% TOTAL sched_debug.cfs_rq[45]:/.runnable_load_avg
8 Â27% +234.9% 28 Â 2% TOTAL sched_debug.cpu#42.cpu_load[1]
6 Â19% +323.5% 28 Â 4% TOTAL sched_debug.cpu#60.cpu_load[4]
8 Â26% +265.0% 29 Â 2% TOTAL sched_debug.cpu#42.cpu_load[3]
7 Â13% +297.2% 28 Â 5% TOTAL sched_debug.cfs_rq[38]:/.runnable_load_avg
8 Â27% +234.9% 28 Â 2% TOTAL sched_debug.cpu#42.cpu_load[2]
6 Â40% +364.5% 28 Â 4% TOTAL sched_debug.cpu#54.cpu_load[4]
306463 Â16% -72.5% 84417 Â11% TOTAL sched_debug.cpu#7.avg_idle
294085 Â13% -74.4% 75243 Â11% TOTAL sched_debug.cpu#58.avg_idle
295178 Â 8% -74.8% 74330 Â13% TOTAL sched_debug.cpu#19.avg_idle
310735 Â12% -74.4% 79628 Â23% TOTAL sched_debug.cpu#12.avg_idle
7 Â21% +275.7% 27 Â 3% TOTAL sched_debug.cpu#59.cpu_load[3]
7 Â30% +277.8% 27 Â 5% TOTAL sched_debug.cpu#62.cpu_load[1]
7 Â40% +300.0% 28 Â 0% TOTAL sched_debug.cpu#40.cpu_load[2]
6 Â34% +314.7% 28 Â 2% TOTAL sched_debug.cpu#51.cpu_load[2]
7 Â49% +280.6% 27 Â 4% TOTAL sched_debug.cfs_rq[40]:/.runnable_load_avg
7 Â34% +297.1% 27 Â 4% TOTAL sched_debug.cpu#55.cpu_load[1]
8 Â20% +245.0% 27 Â 1% TOTAL sched_debug.cpu#36.cpu_load[1]
7 Â23% +286.1% 27 Â 1% TOTAL sched_debug.cpu#36.cpu_load[2]
7 Â16% +288.9% 28 Â 3% TOTAL sched_debug.cpu#52.cpu_load[3]
7 Â13% +278.4% 28 Â 2% TOTAL sched_debug.cpu#63.cpu_load[2]
7 Â23% +288.9% 28 Â 2% TOTAL sched_debug.cpu#39.cpu_load[2]
7 Â28% +271.1% 28 Â 1% TOTAL sched_debug.cpu#39.cpu_load[1]
7 Â18% +288.9% 28 Â 5% TOTAL sched_debug.cfs_rq[63]:/.runnable_load_avg
6 Â28% +302.9% 27 Â 2% TOTAL sched_debug.cfs_rq[37]:/.runnable_load_avg
7 Â30% +255.3% 27 Â 6% TOTAL sched_debug.cpu#38.cpu_load[0]
7 Â25% +283.3% 27 Â 4% TOTAL sched_debug.cpu#38.cpu_load[1]
6 Â22% +324.2% 28 Â 3% TOTAL sched_debug.cpu#38.cpu_load[2]
6 Â25% +308.8% 27 Â 1% TOTAL sched_debug.cfs_rq[56]:/.runnable_load_avg
7 Â38% +267.6% 27 Â 5% TOTAL sched_debug.cpu#62.cpu_load[0]
317966 Â13% -74.2% 82042 Â 9% TOTAL sched_debug.cpu#27.avg_idle
296746 Â16% -74.7% 75078 Â 9% TOTAL sched_debug.cpu#14.avg_idle
298670 Â15% -73.3% 79831 Â22% TOTAL sched_debug.cpu#6.avg_idle
6 Â32% +302.9% 27 Â 5% TOTAL sched_debug.cpu#34.cpu_load[0]
7 Â10% +260.5% 27 Â 3% TOTAL sched_debug.cpu#52.cpu_load[2]
6 Â23% +297.1% 27 Â 0% TOTAL sched_debug.cfs_rq[61]:/.runnable_load_avg
8 Â39% +216.3% 27 Â 1% TOTAL sched_debug.cpu#32.cpu_load[1]
7 Â30% +252.6% 26 Â 2% TOTAL sched_debug.cpu#50.cpu_load[0]
6 Â38% +302.9% 27 Â 1% TOTAL sched_debug.cpu#40.cpu_load[1]
7 Â16% +283.3% 27 Â 2% TOTAL sched_debug.cpu#63.cpu_load[1]
7 Â27% +255.3% 27 Â 2% TOTAL sched_debug.cpu#50.cpu_load[1]
6 Â44% +335.5% 27 Â 3% TOTAL sched_debug.cfs_rq[35]:/.runnable_load_avg
6 Â39% +300.0% 27 Â 2% TOTAL sched_debug.cpu#40.cpu_load[0]
294207 Â16% -73.9% 76789 Â18% TOTAL sched_debug.cpu#20.avg_idle
1454 Â47% +193.6% 4270 Â 1% TOTAL sched_debug.cpu#32.curr->pid
274384 Â28% -64.9% 96227 Â33% TOTAL sched_debug.cpu#33.avg_idle
287321 Â 8% -73.1% 77157 Â10% TOTAL sched_debug.cpu#3.avg_idle
7 Â10% +255.3% 27 Â 4% TOTAL sched_debug.cpu#52.cpu_load[1]
1284 Â29% +228.6% 4222 Â 0% TOTAL sched_debug.cpu#43.curr->pid
349008 Â15% -74.2% 90084 Â 6% TOTAL sched_debug.cpu#21.avg_idle
9 Â11% +230.6% 32 Â 4% TOTAL sched_debug.cpu#49.cpu_load[4]
7 Â30% +281.6% 29 Â 2% TOTAL sched_debug.cpu#43.cpu_load[2]
7 Â29% +297.3% 29 Â 2% TOTAL sched_debug.cpu#43.cpu_load[3]
6 Â38% +323.5% 28 Â 4% TOTAL sched_debug.cpu#54.cpu_load[3]
8 Â29% +223.3% 27 Â10% TOTAL sched_debug.cfs_rq[58]:/.runnable_load_avg
7 Â18% +271.8% 29 Â 2% TOTAL sched_debug.cpu#47.cpu_load[3]
7 Â31% +322.9% 29 Â 2% TOTAL sched_debug.cpu#43.cpu_load[4]
7 Â32% +283.8% 28 Â 4% TOTAL sched_debug.cpu#58.cpu_load[3]
9 Â25% +222.2% 29 Â 3% TOTAL sched_debug.cpu#42.cpu_load[0]
291808 Â12% -72.2% 81207 Â12% TOTAL sched_debug.cpu#0.avg_idle
12900 Â 6% +248.9% 45016 Â 0% TOTAL sched_debug.cfs_rq[61]:/.avg->runnable_avg_sum
282 Â 6% +247.8% 982 Â 0% TOTAL sched_debug.cfs_rq[61]:/.tg_runnable_contrib
307360 Â20% -75.2% 76165 Â 7% TOTAL sched_debug.cpu#13.avg_idle
1213 Â36% +252.3% 4273 Â 0% TOTAL sched_debug.cpu#35.curr->pid
2108032 Â 5% -71.2% 607806 Â 1% TOTAL sched_debug.cpu#34.sched_count
2130952 Â 9% -71.4% 609735 Â 1% TOTAL sched_debug.cpu#33.sched_count
7 Â39% +268.4% 28 Â 3% TOTAL sched_debug.cpu#54.cpu_load[0]
7 Â39% +278.4% 28 Â 3% TOTAL sched_debug.cfs_rq[53]:/.runnable_load_avg
8 Â23% +223.3% 27 Â 2% TOTAL sched_debug.cpu#36.cpu_load[0]
7 Â20% +286.1% 27 Â 4% TOTAL sched_debug.cpu#60.cpu_load[3]
7 Â21% +265.8% 27 Â 4% TOTAL sched_debug.cfs_rq[50]:/.runnable_load_avg
7 Â31% +276.3% 28 Â 2% TOTAL sched_debug.cpu#45.cpu_load[0]
8 Â24% +226.2% 27 Â 2% TOTAL sched_debug.cfs_rq[55]:/.runnable_load_avg
6 Â36% +320.6% 28 Â 4% TOTAL sched_debug.cfs_rq[44]:/.runnable_load_avg
7 Â26% +281.1% 28 Â 2% TOTAL sched_debug.cpu#51.cpu_load[1]
7 Â27% +265.8% 27 Â 4% TOTAL sched_debug.cpu#51.cpu_load[0]
8 Â27% +233.3% 28 Â 2% TOTAL sched_debug.cpu#39.cpu_load[0]
7 Â21% +276.3% 28 Â 2% TOTAL sched_debug.cpu#45.cpu_load[1]
293 Â 8% +234.7% 980 Â 0% TOTAL sched_debug.cfs_rq[56]:/.tg_runnable_contrib
2074224 Â 3% -70.6% 609675 Â 2% TOTAL sched_debug.cpu#63.sched_count
2127419 Â 6% -71.7% 601604 Â 1% TOTAL sched_debug.cpu#46.sched_count
2115949 Â 3% -65.2% 737051 Â35% TOTAL sched_debug.cpu#36.sched_count
13414 Â 8% +235.7% 45027 Â 0% TOTAL sched_debug.cfs_rq[56]:/.avg->runnable_avg_sum
267808 Â13% -72.1% 74782 Â10% TOTAL sched_debug.cpu#5.avg_idle
2069493 Â 8% -68.4% 654987 Â18% TOTAL sched_debug.cpu#45.sched_count
14044 Â11% +220.7% 45039 Â 0% TOTAL sched_debug.cfs_rq[55]:/.avg->runnable_avg_sum
2053686 Â 4% -70.7% 602700 Â 2% TOTAL sched_debug.cpu#52.sched_count
289450 Â13% -68.8% 90401 Â15% TOTAL sched_debug.cpu#1.avg_idle
307 Â11% +219.5% 980 Â 0% TOTAL sched_debug.cfs_rq[55]:/.tg_runnable_contrib
2002864 Â 8% -69.9% 602844 Â 1% TOTAL sched_debug.cpu#40.sched_count
2035365 Â 2% -70.3% 603651 Â 1% TOTAL sched_debug.cpu#47.sched_count
9 Â24% +197.8% 26 Â 1% TOTAL sched_debug.cfs_rq[59]:/.runnable_load_avg
8 Â24% +226.8% 26 Â 6% TOTAL sched_debug.cpu#60.cpu_load[0]
8 Â23% +235.0% 26 Â 4% TOTAL sched_debug.cpu#59.cpu_load[1]
7 Â19% +252.6% 26 Â 4% TOTAL sched_debug.cpu#60.cpu_load[1]
8 Â23% +234.1% 27 Â 3% TOTAL sched_debug.cpu#59.cpu_load[2]
8 Â36% +209.1% 27 Â 2% TOTAL sched_debug.cfs_rq[32]:/.runnable_load_avg
7 Â20% +253.8% 27 Â 2% TOTAL sched_debug.cpu#63.cpu_load[0]
8 Â16% +206.8% 27 Â 4% TOTAL sched_debug.cfs_rq[52]:/.runnable_load_avg
7 Â19% +257.9% 27 Â 4% TOTAL sched_debug.cpu#60.cpu_load[2]
2008711 Â 1% -70.0% 602980 Â 1% TOTAL sched_debug.cpu#61.nr_switches
2100707 Â 7% -71.1% 606142 Â 2% TOTAL sched_debug.cpu#53.sched_count
2046800 Â 1% -65.3% 710307 Â28% TOTAL sched_debug.cpu#60.sched_count
2091856 Â 3% -70.6% 614687 Â 3% TOTAL sched_debug.cpu#38.sched_count
2062814 Â 3% -70.7% 604560 Â 2% TOTAL sched_debug.cpu#54.sched_count
13554 Â11% +232.3% 45048 Â 0% TOTAL sched_debug.cfs_rq[35]:/.avg->runnable_avg_sum
296 Â11% +231.2% 982 Â 0% TOTAL sched_debug.cfs_rq[35]:/.tg_runnable_contrib
1474 Â29% +188.2% 4249 Â 0% TOTAL sched_debug.cpu#62.curr->pid
2049101 Â 6% -70.4% 607303 Â 2% TOTAL sched_debug.cpu#41.sched_count
2167538 Â 9% -71.9% 609753 Â 2% TOTAL sched_debug.cpu#39.sched_count
2036680 Â 3% -70.3% 605379 Â 1% TOTAL sched_debug.cpu#51.sched_count
2017924 Â 1% -70.0% 604898 Â 0% TOTAL sched_debug.cpu#59.sched_count
14509 Â13% +210.5% 45046 Â 0% TOTAL sched_debug.cfs_rq[63]:/.avg->runnable_avg_sum
2076058 Â 5% -69.2% 640162 Â10% TOTAL sched_debug.cpu#37.sched_count
1225 Â36% +253.2% 4329 Â 3% TOTAL sched_debug.cpu#53.curr->pid
1148 Â27% +268.3% 4230 Â 1% TOTAL sched_debug.cpu#61.curr->pid
14570 Â13% +209.0% 45024 Â 0% TOTAL sched_debug.cfs_rq[34]:/.avg->runnable_avg_sum
2020274 Â 0% -69.8% 610215 Â 2% TOTAL sched_debug.cpu#62.nr_switches
319 Â13% +208.0% 982 Â 0% TOTAL sched_debug.cfs_rq[34]:/.tg_runnable_contrib
317 Â13% +208.1% 978 Â 0% TOTAL sched_debug.cfs_rq[63]:/.tg_runnable_contrib
2042951 Â 1% -69.7% 618701 Â 2% TOTAL sched_debug.cpu#62.sched_count
2058101 Â 7% -70.8% 600726 Â 1% TOTAL sched_debug.cpu#44.sched_count
1984924 Â 1% -69.5% 604880 Â 0% TOTAL sched_debug.cpu#59.nr_switches
1971315 Â 1% -69.3% 604542 Â 2% TOTAL sched_debug.cpu#54.nr_switches
2014162 Â 6% -65.2% 700760 Â28% TOTAL sched_debug.cpu#42.sched_count
1953776 Â 2% -68.8% 609656 Â 2% TOTAL sched_debug.cpu#63.nr_switches
2055362 Â 7% -65.1% 717767 Â33% TOTAL sched_debug.cpu#43.sched_count
1231 Â29% +249.4% 4303 Â 2% TOTAL sched_debug.cpu#50.curr->pid
313 Â10% +213.7% 982 Â 0% TOTAL sched_debug.cfs_rq[62]:/.tg_runnable_contrib
7 Â13% +255.3% 27 Â 6% TOTAL sched_debug.cpu#52.cpu_load[0]
8 Â31% +202.3% 26 Â 3% TOTAL sched_debug.cpu#32.cpu_load[0]
1991792 Â 1% -69.3% 611077 Â 1% TOTAL sched_debug.cpu#60.nr_switches
14342 Â10% +214.0% 45031 Â 0% TOTAL sched_debug.cfs_rq[62]:/.avg->runnable_avg_sum
1942697 Â 1% -69.0% 602682 Â 2% TOTAL sched_debug.cpu#52.nr_switches
1950644 Â 1% -69.2% 601592 Â 1% TOTAL sched_debug.cpu#46.nr_switches
1992472 Â 1% -61.6% 765257 Â25% TOTAL sched_debug.cpu#50.sched_count
8 Â20% +241.5% 28 Â 4% TOTAL sched_debug.cpu#58.cpu_load[1]
9 Â12% +215.6% 28 Â 2% TOTAL sched_debug.cfs_rq[39]:/.runnable_load_avg
7 Â31% +283.8% 28 Â 2% TOTAL sched_debug.cpu#54.cpu_load[2]
8 Â22% +238.1% 28 Â 2% TOTAL sched_debug.cpu#47.cpu_load[2]
8 Â26% +250.0% 28 Â 4% TOTAL sched_debug.cpu#58.cpu_load[2]
13572 Â12% +232.0% 45056 Â 0% TOTAL sched_debug.cfs_rq[37]:/.avg->runnable_avg_sum
2047201 Â 6% -67.2% 670954 Â18% TOTAL sched_debug.cpu#35.sched_count
297 Â12% +230.8% 983 Â 0% TOTAL sched_debug.cfs_rq[37]:/.tg_runnable_contrib
1955174 Â 2% -69.1% 603605 Â 1% TOTAL sched_debug.cpu#55.nr_switches
1971669 Â 5% -69.1% 608414 Â 1% TOTAL sched_debug.cpu#49.sched_count
1939844 Â 1% -68.8% 605845 Â 1% TOTAL sched_debug.cpu#36.nr_switches
1949899 Â 1% -68.9% 606124 Â 2% TOTAL sched_debug.cpu#53.nr_switches
315 Â12% +211.7% 984 Â 0% TOTAL sched_debug.cfs_rq[51]:/.tg_runnable_contrib
12 Â27% +166.7% 32 Â 6% TOTAL sched_debug.cpu#49.cpu_load[2]
11 Â21% +184.2% 32 Â 5% TOTAL sched_debug.cpu#49.cpu_load[3]
14438 Â12% +212.0% 45043 Â 0% TOTAL sched_debug.cfs_rq[51]:/.avg->runnable_avg_sum
2001341 Â 4% -66.4% 671981 Â17% TOTAL sched_debug.cpu#57.sched_count
303710 Â17% -70.0% 91263 Â13% TOTAL sched_debug.cpu#9.avg_idle
1900887 Â 1% -68.4% 600434 Â 1% TOTAL sched_debug.cpu#45.nr_switches
14272 Â12% +215.8% 45065 Â 0% TOTAL sched_debug.cfs_rq[59]:/.avg->runnable_avg_sum
1937164 Â 0% -68.5% 609385 Â 1% TOTAL sched_debug.cpu#37.nr_switches
1974368 Â 1% -69.1% 609730 Â 2% TOTAL sched_debug.cpu#39.nr_switches
1973848 Â 2% -62.6% 738151 Â35% TOTAL sched_debug.cpu#58.sched_count
1912735 Â 1% -68.5% 603190 Â 2% TOTAL sched_debug.cpu#50.nr_switches
1931643 Â 2% -68.6% 607250 Â 2% TOTAL sched_debug.cpu#58.nr_switches
1377 Â14% +216.7% 4363 Â 1% TOTAL sched_debug.cpu#49.curr->pid
312 Â12% +214.1% 981 Â 0% TOTAL sched_debug.cfs_rq[59]:/.tg_runnable_contrib
1944033 Â 1% -68.9% 605363 Â 1% TOTAL sched_debug.cpu#51.nr_switches
1915699 Â 1% -68.1% 610629 Â 1% TOTAL sched_debug.cpu#35.nr_switches
15047 Â12% +199.1% 45011 Â 0% TOTAL sched_debug.cfs_rq[32]:/.avg->runnable_avg_sum
46418 Â 2% +213.2% 145374 Â 0% TOTAL sched_debug.cfs_rq[62]:/.exec_clock
1924241 Â 2% -68.3% 609789 Â 2% TOTAL sched_debug.cpu#38.nr_switches
1912400 Â 1% -68.4% 603629 Â 1% TOTAL sched_debug.cpu#47.nr_switches
329 Â12% +197.8% 979 Â 0% TOTAL sched_debug.cfs_rq[32]:/.tg_runnable_contrib
1895051 Â 2% -68.3% 600708 Â 1% TOTAL sched_debug.cpu#44.nr_switches
13983 Â14% +222.1% 45044 Â 0% TOTAL sched_debug.cfs_rq[44]:/.avg->runnable_avg_sum
1901704 Â 1% -67.9% 609715 Â 1% TOTAL sched_debug.cpu#33.nr_switches
14514 Â 2% +214.1% 45583 Â 0% TOTAL sched_debug.cfs_rq[33]:/.avg->runnable_avg_sum
306 Â14% +221.0% 982 Â 0% TOTAL sched_debug.cfs_rq[44]:/.tg_runnable_contrib
1926663 Â 0% -68.2% 613403 Â 2% TOTAL sched_debug.cpu#48.sched_count
317 Â 2% +212.6% 993 Â 0% TOTAL sched_debug.cfs_rq[33]:/.tg_runnable_contrib
1203 Â34% +258.3% 4310 Â 1% TOTAL sched_debug.cpu#56.curr->pid
46837 Â 2% +210.4% 145375 Â 0% TOTAL sched_debug.cfs_rq[60]:/.exec_clock
1874381 Â 1% -67.9% 600905 Â 1% TOTAL sched_debug.cpu#42.nr_switches
8 Â16% +238.1% 28 Â 3% TOTAL sched_debug.cfs_rq[47]:/.runnable_load_avg
9 Â12% +200.0% 27 Â 4% TOTAL sched_debug.cpu#58.cpu_load[0]
9 Â24% +213.3% 28 Â 2% TOTAL sched_debug.cpu#47.cpu_load[1]
9 Â28% +200.0% 28 Â 1% TOTAL sched_debug.cpu#47.cpu_load[0]
8 Â34% +239.0% 27 Â 4% TOTAL sched_debug.cpu#55.cpu_load[0]
8 Â15% +222.7% 28 Â 3% TOTAL sched_debug.cfs_rq[36]:/.runnable_load_avg
8 Â23% +246.3% 28 Â 3% TOTAL sched_debug.cpu#43.cpu_load[1]
8 Â22% +250.0% 28 Â 2% TOTAL sched_debug.cfs_rq[43]:/.runnable_load_avg
9 Â37% +213.3% 28 Â 4% TOTAL sched_debug.cfs_rq[34]:/.runnable_load_avg
7 Â32% +268.4% 28 Â 3% TOTAL sched_debug.cpu#54.cpu_load[1]
9 Â37% +208.9% 27 Â 4% TOTAL sched_debug.cfs_rq[51]:/.runnable_load_avg
1916043 Â 2% -68.3% 607993 Â 2% TOTAL sched_debug.cpu#57.nr_switches
265392 Â17% -64.3% 94795 Â22% TOTAL sched_debug.cpu#43.avg_idle
1889707 Â 1% -68.3% 599744 Â 1% TOTAL sched_debug.cpu#43.nr_switches
9 Â29% +227.1% 31 Â 3% TOTAL sched_debug.cpu#33.cpu_load[4]
39 Â46% +117.4% 84 Â32% TOTAL sched_debug.cfs_rq[17]:/.tg_load_contrib
1867119 Â 0% -67.6% 605812 Â 1% TOTAL sched_debug.cpu#49.nr_switches
1889889 Â 1% -67.8% 607783 Â 1% TOTAL sched_debug.cpu#34.nr_switches
47035 Â 4% +209.1% 145403 Â 0% TOTAL sched_debug.cfs_rq[61]:/.exec_clock
1860154 Â 1% -67.6% 602684 Â 2% TOTAL sched_debug.cpu#48.nr_switches
325 Â10% +202.2% 982 Â 0% TOTAL sched_debug.cfs_rq[39]:/.tg_runnable_contrib
66798546 Â 0% +206.8% 2.049e+08 Â 0% TOTAL softirqs.NET_RX
13894 Â13% +224.1% 45029 Â 0% TOTAL sched_debug.cfs_rq[48]:/.avg->runnable_avg_sum
14866 Â10% +202.7% 44994 Â 0% TOTAL sched_debug.cfs_rq[39]:/.avg->runnable_avg_sum
14874 Â11% +207.3% 45713 Â 0% TOTAL sched_debug.cfs_rq[57]:/.avg->runnable_avg_sum
1893238 Â 1% -67.9% 608368 Â 1% TOTAL sched_debug.cpu#56.sched_count
304 Â13% +222.6% 981 Â 0% TOTAL sched_debug.cfs_rq[48]:/.tg_runnable_contrib
1850633 Â 0% -67.2% 606443 Â 2% TOTAL sched_debug.cpu#41.nr_switches
273840 Â22% -69.2% 84378 Â 8% TOTAL sched_debug.cpu#60.avg_idle
325 Â12% +206.1% 996 Â 0% TOTAL sched_debug.cfs_rq[57]:/.tg_runnable_contrib
14957 Â12% +201.5% 45098 Â 0% TOTAL sched_debug.cfs_rq[58]:/.avg->runnable_avg_sum
47628 Â 2% +205.5% 145491 Â 0% TOTAL sched_debug.cfs_rq[51]:/.exec_clock
271141 Â15% -64.6% 96084 Â33% TOTAL sched_debug.cpu#47.avg_idle
47097 Â 2% +208.9% 145500 Â 0% TOTAL sched_debug.cfs_rq[59]:/.exec_clock
1395 Â13% +210.6% 4334 Â 2% TOTAL sched_debug.cpu#55.curr->pid
15193 Â10% +196.8% 45096 Â 0% TOTAL sched_debug.cfs_rq[50]:/.avg->runnable_avg_sum
326 Â12% +200.6% 981 Â 0% TOTAL sched_debug.cfs_rq[58]:/.tg_runnable_contrib
315 Â 8% +211.4% 982 Â 0% TOTAL sched_debug.cfs_rq[38]:/.tg_runnable_contrib
14456 Â 8% +211.3% 45004 Â 0% TOTAL sched_debug.cfs_rq[38]:/.avg->runnable_avg_sum
47896 Â 3% +203.5% 145372 Â 0% TOTAL sched_debug.cfs_rq[54]:/.exec_clock
332 Â10% +195.7% 984 Â 0% TOTAL sched_debug.cfs_rq[50]:/.tg_runnable_contrib
1865184 Â 2% -67.4% 608353 Â 1% TOTAL sched_debug.cpu#56.nr_switches
47722 Â 2% +204.7% 145411 Â 0% TOTAL sched_debug.cfs_rq[55]:/.exec_clock
47334 Â 2% +207.3% 145475 Â 0% TOTAL sched_debug.cfs_rq[52]:/.exec_clock
15320 Â 5% +193.8% 45011 Â 0% TOTAL sched_debug.cfs_rq[46]:/.avg->runnable_avg_sum
48192 Â 1% +201.6% 145362 Â 0% TOTAL sched_debug.cfs_rq[63]:/.exec_clock
48037 Â 1% +202.7% 145411 Â 0% TOTAL sched_debug.cfs_rq[32]:/.exec_clock
1497 Â19% +193.5% 4395 Â 5% TOTAL sched_debug.cpu#52.curr->pid
1846875 Â 0% -67.1% 607481 Â 1% TOTAL sched_debug.cpu#32.nr_switches
287922 Â23% -70.6% 84722 Â16% TOTAL sched_debug.cpu#11.avg_idle
48162 Â 3% +202.0% 145448 Â 0% TOTAL sched_debug.cfs_rq[36]:/.exec_clock
335 Â 5% +192.7% 982 Â 0% TOTAL sched_debug.cfs_rq[46]:/.tg_runnable_contrib
47591 Â 2% +205.6% 145417 Â 0% TOTAL sched_debug.cfs_rq[53]:/.exec_clock
270817 Â22% -66.3% 91254 Â13% TOTAL sched_debug.cpu#63.avg_idle
15169 Â10% +196.9% 45031 Â 0% TOTAL sched_debug.cfs_rq[36]:/.avg->runnable_avg_sum
332 Â10% +196.3% 984 Â 0% TOTAL sched_debug.cfs_rq[36]:/.tg_runnable_contrib
9 Â31% +195.7% 27 Â 3% TOTAL sched_debug.cfs_rq[54]:/.runnable_load_avg
1552 Â27% +175.8% 4281 Â 2% TOTAL sched_debug.cpu#60.curr->pid
49056 Â 1% +199.2% 146799 Â 0% TOTAL sched_debug.cfs_rq[33]:/.exec_clock
1431 Â22% +201.8% 4321 Â 1% TOTAL sched_debug.cpu#34.curr->pid
48712 Â 2% +198.8% 145545 Â 0% TOTAL sched_debug.cfs_rq[50]:/.exec_clock
48311 Â 1% +201.0% 145402 Â 0% TOTAL sched_debug.cfs_rq[39]:/.exec_clock
332 Â 9% +201.0% 999 Â 0% TOTAL sched_debug.cfs_rq[49]:/.tg_runnable_contrib
226926 Â13% -62.6% 84981 Â24% TOTAL sched_debug.cpu#38.avg_idle
15180 Â 9% +201.5% 45763 Â 0% TOTAL sched_debug.cfs_rq[49]:/.avg->runnable_avg_sum
48719 Â 2% +198.5% 145438 Â 0% TOTAL sched_debug.cfs_rq[56]:/.exec_clock
1795031 Â 0% -66.4% 602823 Â 1% TOTAL sched_debug.cpu#40.nr_switches
48496 Â 2% +199.9% 145443 Â 0% TOTAL sched_debug.cfs_rq[58]:/.exec_clock
15260 Â11% +195.2% 45042 Â 0% TOTAL sched_debug.cfs_rq[40]:/.avg->runnable_avg_sum
285472 Â25% -68.5% 89860 Â28% TOTAL sched_debug.cpu#15.avg_idle
334 Â11% +194.3% 982 Â 0% TOTAL sched_debug.cfs_rq[40]:/.tg_runnable_contrib
49933 Â 2% +194.4% 147012 Â 0% TOTAL sched_debug.cfs_rq[57]:/.exec_clock
49101 Â 2% +196.1% 145384 Â 0% TOTAL sched_debug.cfs_rq[37]:/.exec_clock
49003 Â 1% +196.8% 145444 Â 0% TOTAL sched_debug.cfs_rq[35]:/.exec_clock
1390 Â22% +206.7% 4265 Â 0% TOTAL sched_debug.cpu#37.curr->pid
48581 Â 3% +199.4% 145429 Â 0% TOTAL sched_debug.cfs_rq[48]:/.exec_clock
1630 Â25% +160.5% 4247 Â 2% TOTAL sched_debug.cpu#47.curr->pid
49123 Â 2% +196.0% 145404 Â 0% TOTAL sched_debug.cfs_rq[46]:/.exec_clock
266050 Â20% -59.4% 107922 Â24% TOTAL sched_debug.cpu#52.avg_idle
14978 Â10% +200.9% 45075 Â 0% TOTAL sched_debug.cfs_rq[45]:/.avg->runnable_avg_sum
1417 Â21% +204.3% 4312 Â 2% TOTAL sched_debug.cpu#39.curr->pid
12 Â26% +160.7% 31 Â 9% TOTAL sched_debug.cpu#49.cpu_load[1]
327 Â10% +198.7% 978 Â 0% TOTAL sched_debug.cfs_rq[45]:/.tg_runnable_contrib
270121 Â17% -61.7% 103375 Â20% TOTAL sched_debug.cpu#32.avg_idle
50428 Â 2% +189.1% 145774 Â 0% TOTAL sched_debug.cfs_rq[42]:/.exec_clock
1574 Â34% +166.5% 4194 Â 1% TOTAL sched_debug.cpu#45.curr->pid
50663 Â 1% +189.6% 146717 Â 0% TOTAL sched_debug.cfs_rq[49]:/.exec_clock
49273 Â 2% +195.3% 145509 Â 0% TOTAL sched_debug.cfs_rq[34]:/.exec_clock
15310 Â13% +194.0% 45007 Â 0% TOTAL sched_debug.cfs_rq[60]:/.avg->runnable_avg_sum
236078 Â11% -65.9% 80473 Â14% TOTAL sched_debug.cpu#41.avg_idle
49673 Â 2% +192.7% 145415 Â 0% TOTAL sched_debug.cfs_rq[38]:/.exec_clock
8 Â26% +235.0% 26 Â 5% TOTAL sched_debug.cpu#59.cpu_load[0]
50433 Â 2% +188.4% 145447 Â 0% TOTAL sched_debug.cfs_rq[45]:/.exec_clock
15213 Â 5% +196.2% 45068 Â 0% TOTAL sched_debug.cfs_rq[52]:/.avg->runnable_avg_sum
334 Â13% +193.4% 980 Â 0% TOTAL sched_debug.cfs_rq[60]:/.tg_runnable_contrib
50577 Â 1% +187.9% 145609 Â 0% TOTAL sched_debug.cfs_rq[43]:/.exec_clock
249630 Â15% -65.3% 86741 Â 5% TOTAL sched_debug.cpu#8.avg_idle
332 Â 5% +194.9% 980 Â 0% TOTAL sched_debug.cfs_rq[52]:/.tg_runnable_contrib
235932 Â 9% -62.3% 88954 Â19% TOTAL sched_debug.cpu#34.avg_idle
50890 Â 2% +185.7% 145384 Â 0% TOTAL sched_debug.cfs_rq[47]:/.exec_clock
50930 Â 1% +185.7% 145517 Â 0% TOTAL sched_debug.cfs_rq[40]:/.exec_clock
14328 Â16% +214.3% 45037 Â 0% TOTAL sched_debug.cfs_rq[53]:/.avg->runnable_avg_sum
16062 Â 9% +180.4% 45031 Â 0% TOTAL sched_debug.cfs_rq[47]:/.avg->runnable_avg_sum
51840 Â 1% +183.3% 146858 Â 0% TOTAL sched_debug.cfs_rq[41]:/.exec_clock
1499 Â20% +187.0% 4301 Â 1% TOTAL sched_debug.cpu#51.curr->pid
50662 Â 2% +187.3% 145570 Â 0% TOTAL sched_debug.cfs_rq[44]:/.exec_clock
269032 Â 7% -66.8% 89230 Â17% TOTAL sched_debug.cpu#59.avg_idle
313 Â16% +212.5% 980 Â 0% TOTAL sched_debug.cfs_rq[53]:/.tg_runnable_contrib
16303 Â17% +176.2% 45035 Â 0% TOTAL sched_debug.cfs_rq[54]:/.avg->runnable_avg_sum
356 Â17% +174.9% 979 Â 0% TOTAL sched_debug.cfs_rq[54]:/.tg_runnable_contrib
351 Â 9% +178.0% 977 Â 0% TOTAL sched_debug.cfs_rq[47]:/.tg_runnable_contrib
15587 Â11% +189.2% 45070 Â 0% TOTAL sched_debug.cfs_rq[43]:/.avg->runnable_avg_sum
341 Â11% +187.8% 982 Â 0% TOTAL sched_debug.cfs_rq[43]:/.tg_runnable_contrib
99841369 Â 0% +180.1% 2.796e+08 Â 1% TOTAL numa-numastat.node2.local_node
99843858 Â 0% +180.1% 2.796e+08 Â 1% TOTAL numa-numastat.node2.numa_hit
10 Â17% +196.2% 30 Â 3% TOTAL sched_debug.cpu#57.cpu_load[4]
11 Â12% +189.1% 31 Â 8% TOTAL sched_debug.cpu#41.cpu_load[4]
1560 Â11% +175.1% 4292 Â 1% TOTAL sched_debug.cpu#59.curr->pid
49642548 Â 0% +177.8% 1.379e+08 Â 1% TOTAL numa-vmstat.node2.numa_local
8 Â28% +220.5% 28 Â 4% TOTAL sched_debug.cpu#43.cpu_load[0]
49698361 Â 0% +177.6% 1.38e+08 Â 1% TOTAL numa-vmstat.node2.numa_hit
219871 Â18% -57.9% 92485 Â15% TOTAL sched_debug.cpu#62.avg_idle
50123901 Â 0% +178.0% 1.393e+08 Â 1% TOTAL numa-vmstat.node0.numa_local
50127863 Â 0% +177.9% 1.393e+08 Â 1% TOTAL numa-vmstat.node0.numa_hit
16250 Â 5% +178.5% 45253 Â 0% TOTAL sched_debug.cfs_rq[42]:/.avg->runnable_avg_sum
49861680 Â 0% +176.2% 1.377e+08 Â 1% TOTAL numa-vmstat.node3.numa_local
355 Â 5% +177.0% 985 Â 0% TOTAL sched_debug.cfs_rq[42]:/.tg_runnable_contrib
49917445 Â 0% +176.0% 1.378e+08 Â 1% TOTAL numa-vmstat.node3.numa_hit
1564 Â12% +175.2% 4304 Â 2% TOTAL sched_debug.cpu#54.curr->pid
49911338 Â 0% +175.9% 1.377e+08 Â 1% TOTAL numa-vmstat.node1.numa_local
248524 Â15% -58.0% 104482 Â29% TOTAL sched_debug.cpu#46.avg_idle
49966794 Â 0% +175.7% 1.377e+08 Â 1% TOTAL numa-vmstat.node1.numa_hit
1429 Â20% +198.6% 4269 Â 0% TOTAL sched_debug.cpu#44.curr->pid
16484 Â 8% +175.8% 45465 Â 0% TOTAL sched_debug.cfs_rq[41]:/.avg->runnable_avg_sum
361 Â 8% +174.8% 992 Â 0% TOTAL sched_debug.cfs_rq[41]:/.tg_runnable_contrib
1453 Â23% +197.1% 4318 Â 1% TOTAL sched_debug.cpu#57.curr->pid
1424 Â28% +213.8% 4470 Â 5% TOTAL sched_debug.cpu#63.curr->pid
1420 Â23% +201.2% 4278 Â 3% TOTAL sched_debug.cpu#58.curr->pid
211265 Â21% -57.9% 88934 Â31% TOTAL sched_debug.cpu#39.avg_idle
37 Â32% +129.8% 86 Â10% TOTAL sched_debug.cfs_rq[3]:/.tg_load_contrib
2788857 Â 1% +163.2% 7340919 Â 1% TOTAL sched_debug.cfs_rq[16]:/.min_vruntime
224622 Â17% -59.3% 91521 Â15% TOTAL sched_debug.cpu#35.avg_idle
2764205 Â 1% +161.1% 7216910 Â 1% TOTAL sched_debug.cfs_rq[8]:/.min_vruntime
2819278 Â 1% +156.7% 7238266 Â 1% TOTAL sched_debug.cfs_rq[10]:/.min_vruntime
2837162 Â 1% +159.3% 7357411 Â 1% TOTAL sched_debug.cfs_rq[24]:/.min_vruntime
2817770 Â 1% +160.8% 7349390 Â 1% TOTAL sched_debug.cfs_rq[17]:/.min_vruntime
12 Â31% +160.0% 31 Â 4% TOTAL sched_debug.cpu#33.cpu_load[3]
2839736 Â 1% +154.5% 7225910 Â 1% TOTAL sched_debug.cfs_rq[13]:/.min_vruntime
37 Â38% +86.6% 69 Â45% TOTAL sched_debug.cfs_rq[27]:/.tg_load_contrib
1826 Â15% +133.4% 4262 Â 0% TOTAL sched_debug.cpu#36.curr->pid
2880692 Â 3% +154.0% 7316909 Â 1% TOTAL sched_debug.cfs_rq[0]:/.min_vruntime
1543 Â16% +175.8% 4257 Â 0% TOTAL sched_debug.cpu#46.curr->pid
2903723 Â 1% +153.0% 7346423 Â 1% TOTAL sched_debug.cfs_rq[20]:/.min_vruntime
2853451 Â 1% +156.8% 7328372 Â 1% TOTAL sched_debug.cfs_rq[18]:/.min_vruntime
2881811 Â 2% +154.8% 7342530 Â 0% TOTAL sched_debug.cfs_rq[25]:/.min_vruntime
3.22 Â 1% +149.8% 8.05 Â 3% TOTAL perf-profile.cpu-cycles.copy_user_generic_string.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg
2848635 Â 1% +153.7% 7225776 Â 1% TOTAL sched_debug.cfs_rq[12]:/.min_vruntime
2878875 Â 2% +155.2% 7346322 Â 1% TOTAL sched_debug.cfs_rq[2]:/.min_vruntime
222782 Â27% -61.7% 85268 Â 9% TOTAL sched_debug.cpu#56.avg_idle
2831503 Â 1% +154.5% 7205461 Â 1% TOTAL sched_debug.cfs_rq[11]:/.min_vruntime
2835663 Â 1% +154.6% 7218527 Â 1% TOTAL sched_debug.cfs_rq[9]:/.min_vruntime
1704 Â26% +152.9% 4309 Â 1% TOTAL sched_debug.cpu#41.curr->pid
12 Â35% +130.6% 28 Â 2% TOTAL sched_debug.cfs_rq[48]:/.load
2844115 Â 0% +154.4% 7235577 Â 1% TOTAL sched_debug.cfs_rq[15]:/.min_vruntime
1665 Â28% +159.1% 4315 Â 1% TOTAL sched_debug.cpu#33.curr->pid
2892891 Â 1% +154.2% 7353510 Â 1% TOTAL sched_debug.cfs_rq[19]:/.min_vruntime
2901579 Â 2% +152.7% 7332677 Â 1% TOTAL sched_debug.cfs_rq[22]:/.min_vruntime
2896475 Â 2% +154.3% 7366784 Â 0% TOTAL sched_debug.cfs_rq[23]:/.min_vruntime
2911824 Â 1% +151.7% 7327805 Â 1% TOTAL sched_debug.cfs_rq[1]:/.min_vruntime
2966470 Â 1% +147.9% 7354082 Â 1% TOTAL sched_debug.cfs_rq[29]:/.min_vruntime
2884101 Â 0% +150.3% 7219627 Â 1% TOTAL sched_debug.cfs_rq[14]:/.min_vruntime
2925842 Â 1% +151.9% 7369360 Â 1% TOTAL sched_debug.cfs_rq[31]:/.min_vruntime
2902721 Â 1% +152.7% 7334172 Â 1% TOTAL sched_debug.cfs_rq[21]:/.min_vruntime
2924791 Â 2% +150.8% 7336302 Â 1% TOTAL sched_debug.cfs_rq[26]:/.min_vruntime
11 Â26% +154.2% 30 Â 2% TOTAL sched_debug.cpu#57.cpu_load[3]
2910713 Â 1% +151.6% 7322791 Â 1% TOTAL sched_debug.cfs_rq[3]:/.min_vruntime
2952231 Â 2% +149.6% 7369935 Â 0% TOTAL sched_debug.cfs_rq[27]:/.min_vruntime
1327 Â42% +219.3% 4239 Â 0% TOTAL sched_debug.cpu#40.curr->pid
2975600 Â 0% +146.9% 7348061 Â 1% TOTAL sched_debug.cfs_rq[28]:/.min_vruntime
2927020 Â 2% +150.3% 7326407 Â 1% TOTAL sched_debug.cfs_rq[5]:/.min_vruntime
2937147 Â 0% +148.1% 7287431 Â 1% TOTAL sched_debug.cfs_rq[7]:/.min_vruntime
12 Â16% +151.6% 32 Â12% TOTAL sched_debug.cpu#49.cpu_load[0]
2972203 Â 1% +146.7% 7331240 Â 0% TOTAL sched_debug.cfs_rq[30]:/.min_vruntime
2917430 Â 1% +149.9% 7291729 Â 1% TOTAL sched_debug.cfs_rq[4]:/.min_vruntime
9 Â19% +177.6% 27 Â 4% TOTAL sched_debug.cfs_rq[60]:/.runnable_load_avg
2903992 Â 2% +150.8% 7282050 Â 1% TOTAL sched_debug.cfs_rq[6]:/.min_vruntime
37 Â31% +113.3% 80 Â23% TOTAL sched_debug.cfs_rq[29]:/.tg_load_contrib
230273 Â17% -51.7% 111201 Â32% TOTAL sched_debug.cpu#50.avg_idle
0.70 Â 1% +140.3% 1.68 Â 3% TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve
206691 Â15% -57.8% 87201 Â 8% TOTAL sched_debug.cpu#61.avg_idle
259396 Â 5% +122.7% 577766 Â 1% TOTAL sched_debug.cpu#62.ttwu_count
12 Â34% +134.9% 29 Â 3% TOTAL sched_debug.cpu#57.cpu_load[2]
269666 Â16% -57.7% 114191 Â17% TOTAL sched_debug.cpu#36.avg_idle
14 Â36% +119.7% 31 Â 5% TOTAL sched_debug.cpu#33.cpu_load[2]
1706 Â16% +149.8% 4262 Â 0% TOTAL sched_debug.cpu#38.curr->pid
242840 Â 4% -54.8% 109685 Â27% TOTAL sched_debug.cpu#45.avg_idle
262430 Â 5% +120.6% 578956 Â 1% TOTAL sched_debug.cpu#60.ttwu_count
139692 Â 4% -54.6% 63480 Â 2% TOTAL proc-vmstat.numa_hint_faults
12 Â19% +148.4% 31 Â 9% TOTAL sched_debug.cpu#41.cpu_load[3]
264847 Â 9% +116.3% 572756 Â 2% TOTAL sched_debug.cpu#61.ttwu_count
0.50 Â 7% +122.0% 1.11 Â12% TOTAL perf-profile.cpu-cycles.sctp_sendmsg.inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg
30 Â12% +115.3% 64 Â46% TOTAL sched_debug.cfs_rq[19]:/.tg_load_contrib
11 Â28% +139.0% 28 Â 5% TOTAL sched_debug.cfs_rq[42]:/.runnable_load_avg
263882 Â 5% +119.9% 580288 Â 1% TOTAL sched_debug.cpu#59.ttwu_count
15 Â41% +105.3% 30 Â 7% TOTAL sched_debug.cpu#33.cpu_load[1]
15 Â41% +97.5% 31 Â10% TOTAL sched_debug.cpu#33.cpu_load[0]
232649 Â 8% -53.9% 107165 Â 9% TOTAL sched_debug.cpu#37.avg_idle
1870 Â19% +126.4% 4234 Â 0% TOTAL sched_debug.cpu#42.curr->pid
276936 Â 2% +110.2% 582259 Â 1% TOTAL sched_debug.cpu#32.ttwu_count
32 Â13% +116.7% 70 Â31% TOTAL sched_debug.cfs_rq[24]:/.tg_load_contrib
3.48 Â 0% +111.1% 7.35 Â 1% TOTAL perf-profile.cpu-cycles.copy_user_generic_string.skb_copy_datagram_iovec.sctp_recvmsg.sock_common_recvmsg.sock_recvmsg
276884 Â 2% +111.1% 584406 Â 1% TOTAL sched_debug.cpu#33.ttwu_count
278248 Â 4% +107.0% 575968 Â 2% TOTAL sched_debug.cpu#55.ttwu_count
276813 Â 5% +109.4% 579782 Â 1% TOTAL sched_debug.cpu#51.ttwu_count
274908 Â 2% +110.4% 578527 Â 1% TOTAL sched_debug.cpu#52.ttwu_count
276883 Â 5% +108.8% 578248 Â 1% TOTAL sched_debug.cpu#53.ttwu_count
13 Â48% +127.3% 30 Â 8% TOTAL sched_debug.cpu#57.cpu_load[0]
12 Â42% +130.2% 29 Â 3% TOTAL sched_debug.cpu#57.cpu_load[1]
12 Â47% +159.0% 31 Â13% TOTAL sched_debug.cfs_rq[41]:/.runnable_load_avg
96427 Â 3% -51.4% 46839 Â 0% TOTAL proc-vmstat.numa_hint_faults_local
14 Â23% +113.5% 31 Â11% TOTAL sched_debug.cpu#41.cpu_load[2]
281119 Â 3% +105.5% 577638 Â 1% TOTAL sched_debug.cpu#63.ttwu_count
279804 Â 7% +105.6% 575372 Â 1% TOTAL sched_debug.cpu#54.ttwu_count
285799 Â 3% +104.0% 583075 Â 1% TOTAL sched_debug.cpu#39.ttwu_count
291642 Â21% -55.2% 130660 Â18% TOTAL sched_debug.cpu#40.avg_idle
140 Â31% -57.1% 60 Â44% TOTAL sched_debug.cfs_rq[43]:/.tg_load_contrib
13879 Â 0% +104.3% 28362 Â 0% TOTAL proc-vmstat.pgactivate
251355 Â27% -49.9% 126023 Â33% TOTAL sched_debug.cpu#42.avg_idle
284829 Â 5% +104.2% 581719 Â 1% TOTAL sched_debug.cpu#36.ttwu_count
287754 Â 5% +101.5% 579892 Â 1% TOTAL sched_debug.cpu#57.ttwu_count
290676 Â 4% +98.4% 576782 Â 1% TOTAL sched_debug.cpu#50.ttwu_count
15 Â34% +92.4% 30 Â11% TOTAL sched_debug.cfs_rq[33]:/.runnable_load_avg
28 Â12% +100.7% 57 Â30% TOTAL sched_debug.cfs_rq[31]:/.tg_load_contrib
287909 Â 4% +101.5% 580220 Â 0% TOTAL sched_debug.cpu#56.ttwu_count
281438 Â 6% +105.0% 576839 Â 1% TOTAL sched_debug.cpu#58.ttwu_count
223246 Â22% -59.3% 90847 Â16% TOTAL sched_debug.cpu#57.avg_idle
292834 Â 2% +98.9% 582416 Â 1% TOTAL sched_debug.cpu#35.ttwu_count
299007 Â 1% +93.8% 579492 Â 1% TOTAL sched_debug.cpu#49.ttwu_count
296051 Â 3% +96.4% 581581 Â 0% TOTAL sched_debug.cpu#37.ttwu_count
155938 Â 3% -48.9% 79614 Â 2% TOTAL proc-vmstat.numa_pte_updates
292818 Â 4% +99.2% 583153 Â 1% TOTAL sched_debug.cpu#34.ttwu_count
291339 Â 3% +97.8% 576203 Â 1% TOTAL sched_debug.cpu#46.ttwu_count
1173228 Â19% -48.8% 601063 Â 1% TOTAL sched_debug.cpu#8.sched_count
291043 Â 6% +98.2% 576756 Â 1% TOTAL sched_debug.cpu#48.ttwu_count
32644 Â 0% +92.8% 62936 Â 0% TOTAL sched_debug.cfs_rq[0]:/.tg->runnable_avg
32648 Â 0% +92.8% 62937 Â 0% TOTAL sched_debug.cfs_rq[1]:/.tg->runnable_avg
32650 Â 0% +92.8% 62937 Â 0% TOTAL sched_debug.cfs_rq[2]:/.tg->runnable_avg
32654 Â 0% +92.7% 62937 Â 0% TOTAL sched_debug.cfs_rq[3]:/.tg->runnable_avg
32665 Â 0% +92.7% 62937 Â 0% TOTAL sched_debug.cfs_rq[4]:/.tg->runnable_avg
32674 Â 0% +92.6% 62938 Â 0% TOTAL sched_debug.cfs_rq[5]:/.tg->runnable_avg
32689 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[8]:/.tg->runnable_avg
32678 Â 0% +92.6% 62937 Â 0% TOTAL sched_debug.cfs_rq[6]:/.tg->runnable_avg
32690 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[10]:/.tg->runnable_avg
32691 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[9]:/.tg->runnable_avg
32686 Â 0% +92.6% 62937 Â 0% TOTAL sched_debug.cfs_rq[7]:/.tg->runnable_avg
32692 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[11]:/.tg->runnable_avg
32696 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[12]:/.tg->runnable_avg
32701 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[13]:/.tg->runnable_avg
32704 Â 0% +92.4% 62938 Â 0% TOTAL sched_debug.cfs_rq[14]:/.tg->runnable_avg
32702 Â 0% +92.5% 62938 Â 0% TOTAL sched_debug.cfs_rq[15]:/.tg->runnable_avg
32717 Â 0% +92.4% 62937 Â 0% TOTAL sched_debug.cfs_rq[18]:/.tg->runnable_avg
32707 Â 0% +92.4% 62938 Â 0% TOTAL sched_debug.cfs_rq[16]:/.tg->runnable_avg
32713 Â 0% +92.4% 62937 Â 0% TOTAL sched_debug.cfs_rq[17]:/.tg->runnable_avg
32722 Â 0% +92.3% 62938 Â 0% TOTAL sched_debug.cfs_rq[19]:/.tg->runnable_avg
32727 Â 0% +92.3% 62938 Â 0% TOTAL sched_debug.cfs_rq[20]:/.tg->runnable_avg
32732 Â 0% +92.3% 62938 Â 0% TOTAL sched_debug.cfs_rq[21]:/.tg->runnable_avg
32740 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[24]:/.tg->runnable_avg
32739 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[23]:/.tg->runnable_avg
32743 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[26]:/.tg->runnable_avg
32739 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[22]:/.tg->runnable_avg
32743 Â 0% +92.2% 62937 Â 0% TOTAL sched_debug.cfs_rq[27]:/.tg->runnable_avg
32746 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[25]:/.tg->runnable_avg
32751 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[29]:/.tg->runnable_avg
32751 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[30]:/.tg->runnable_avg
32747 Â 0% +92.2% 62938 Â 0% TOTAL sched_debug.cfs_rq[28]:/.tg->runnable_avg
32752 Â 0% +92.2% 62939 Â 0% TOTAL sched_debug.cfs_rq[31]:/.tg->runnable_avg
32752 Â 0% +92.2% 62939 Â 0% TOTAL sched_debug.cfs_rq[32]:/.tg->runnable_avg
32759 Â 0% +92.1% 62939 Â 0% TOTAL sched_debug.cfs_rq[33]:/.tg->runnable_avg
32770 Â 0% +92.1% 62939 Â 0% TOTAL sched_debug.cfs_rq[37]:/.tg->runnable_avg
32757 Â 0% +92.1% 62939 Â 0% TOTAL sched_debug.cfs_rq[34]:/.tg->runnable_avg
32766 Â 0% +92.1% 62939 Â 0% TOTAL sched_debug.cfs_rq[36]:/.tg->runnable_avg
32765 Â 0% +92.1% 62939 Â 0% TOTAL sched_debug.cfs_rq[35]:/.tg->runnable_avg
32767 Â 0% +92.1% 62939 Â 0% TOTAL sched_debug.cfs_rq[38]:/.tg->runnable_avg
32774 Â 0% +92.0% 62939 Â 0% TOTAL sched_debug.cfs_rq[41]:/.tg->runnable_avg
32775 Â 0% +92.0% 62939 Â 0% TOTAL sched_debug.cfs_rq[42]:/.tg->runnable_avg
32774 Â 0% +92.0% 62939 Â 0% TOTAL sched_debug.cfs_rq[39]:/.tg->runnable_avg
32779 Â 0% +92.0% 62939 Â 0% TOTAL sched_debug.cfs_rq[43]:/.tg->runnable_avg
32773 Â 0% +92.0% 62939 Â 0% TOTAL sched_debug.cfs_rq[40]:/.tg->runnable_avg
32822 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[63]:/.tg->runnable_avg
32820 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[62]:/.tg->runnable_avg
32793 Â 0% +91.9% 62939 Â 0% TOTAL sched_debug.cfs_rq[47]:/.tg->runnable_avg
32796 Â 0% +91.9% 62940 Â 0% TOTAL sched_debug.cfs_rq[48]:/.tg->runnable_avg
32818 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[61]:/.tg->runnable_avg
32821 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[60]:/.tg->runnable_avg
32790 Â 0% +91.9% 62939 Â 0% TOTAL sched_debug.cfs_rq[44]:/.tg->runnable_avg
32791 Â 0% +91.9% 62939 Â 0% TOTAL sched_debug.cfs_rq[46]:/.tg->runnable_avg
32819 Â 0% +91.8% 62941 Â 0% TOTAL sched_debug.cfs_rq[59]:/.tg->runnable_avg
32793 Â 0% +91.9% 62939 Â 0% TOTAL sched_debug.cfs_rq[45]:/.tg->runnable_avg
32812 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[56]:/.tg->runnable_avg
32817 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[57]:/.tg->runnable_avg
32809 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[52]:/.tg->runnable_avg
32811 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[55]:/.tg->runnable_avg
32801 Â 0% +91.9% 62940 Â 0% TOTAL sched_debug.cfs_rq[49]:/.tg->runnable_avg
32822 Â 0% +91.8% 62941 Â 0% TOTAL sched_debug.cfs_rq[58]:/.tg->runnable_avg
32808 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[54]:/.tg->runnable_avg
32812 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[53]:/.tg->runnable_avg
32809 Â 0% +91.8% 62940 Â 0% TOTAL sched_debug.cfs_rq[51]:/.tg->runnable_avg
32804 Â 0% +91.9% 62940 Â 0% TOTAL sched_debug.cfs_rq[50]:/.tg->runnable_avg
309179 Â 2% +87.8% 580793 Â 1% TOTAL sched_debug.cpu#41.ttwu_count
81 Â44% -62.2% 30 Â 4% TOTAL sched_debug.cfs_rq[62]:/.tg_load_contrib
5254994 Â 0% +89.6% 9963624 Â 0% TOTAL softirqs.TIMER
3.69 Â 1% +86.0% 6.87 Â 4% TOTAL perf-profile.cpu-cycles.memcpy.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter.sctp_do_sm
308199 Â 4% +86.7% 575358 Â 1% TOTAL sched_debug.cpu#42.ttwu_count
305544 Â 4% +90.6% 582390 Â 1% TOTAL sched_debug.cpu#38.ttwu_count
309614 Â 1% +86.5% 577365 Â 1% TOTAL sched_debug.cpu#43.ttwu_count
14 Â33% +98.6% 28 Â 2% TOTAL sched_debug.cpu#35.load
309719 Â 5% +85.8% 575506 Â 1% TOTAL sched_debug.cpu#45.ttwu_count
1081161 Â 1% -46.1% 582374 Â 1% TOTAL sched_debug.cpu#29.ttwu_count
316152 Â 3% +82.7% 577646 Â 1% TOTAL sched_debug.cpu#40.ttwu_count
27 Â 8% +107.9% 57 Â49% TOTAL sched_debug.cfs_rq[9]:/.tg_load_contrib
1083438 Â 0% -45.9% 586638 Â 0% TOTAL sched_debug.cpu#7.ttwu_count
315864 Â 5% +82.6% 576755 Â 1% TOTAL sched_debug.cpu#47.ttwu_count
31 Â 7% +69.2% 52 Â36% TOTAL sched_debug.cfs_rq[13]:/.tg_load_contrib
1084154 Â 1% -45.7% 588437 Â 1% TOTAL sched_debug.cpu#30.ttwu_count
311962 Â 5% +84.9% 576905 Â 1% TOTAL sched_debug.cpu#44.ttwu_count
1060523 Â 1% -45.1% 582533 Â 1% TOTAL sched_debug.cpu#21.ttwu_count
1054699 Â 1% -45.1% 579000 Â 1% TOTAL sched_debug.cpu#14.ttwu_count
17 Â22% +70.8% 30 Â 2% TOTAL sched_debug.cpu#8.cpu_load[4]
15 Â26% +103.9% 31 Â12% TOTAL sched_debug.cpu#41.cpu_load[1]
28 Â 4% +85.4% 53 Â38% TOTAL sched_debug.cfs_rq[30]:/.tg_load_contrib
1075268 Â 1% -45.5% 586405 Â 1% TOTAL sched_debug.cpu#28.ttwu_count
1059908 Â 2% -44.9% 584032 Â 1% TOTAL sched_debug.cpu#31.ttwu_count
1068780 Â 1% -45.5% 582915 Â 1% TOTAL sched_debug.cpu#27.ttwu_count
1068310 Â 0% -44.9% 588354 Â 1% TOTAL sched_debug.cpu#5.ttwu_count
1055705 Â 2% -45.0% 580123 Â 2% TOTAL sched_debug.cpu#23.ttwu_count
1054570 Â 1% -44.7% 583394 Â 1% TOTAL sched_debug.cpu#20.ttwu_count
254366 Â43% -59.3% 103437 Â13% TOTAL sched_debug.cpu#54.avg_idle
1063589 Â 2% -44.7% 587718 Â 1% TOTAL sched_debug.cpu#6.ttwu_count
1058139 Â 3% -44.6% 585929 Â 1% TOTAL sched_debug.cpu#22.ttwu_count
1067145 Â 1% -44.9% 588220 Â 1% TOTAL sched_debug.cpu#4.ttwu_count
1057669 Â 0% -44.3% 588982 Â 1% TOTAL sched_debug.cpu#1.ttwu_count
1059310 Â 1% -44.3% 589806 Â 1% TOTAL sched_debug.cpu#3.ttwu_count
1043843 Â 1% -44.2% 581948 Â 1% TOTAL sched_debug.cpu#18.ttwu_count
1031973 Â 1% -43.9% 579237 Â 1% TOTAL sched_debug.cpu#11.ttwu_count
18 Â24% +70.7% 31 Â13% TOTAL sched_debug.cpu#41.cpu_load[0]
1047788 Â 2% -44.3% 583196 Â 1% TOTAL sched_debug.cpu#19.ttwu_count
1044163 Â 2% -44.2% 583017 Â 1% TOTAL sched_debug.cpu#26.ttwu_count
1288766 Â38% -52.8% 608462 Â 1% TOTAL sched_debug.cpu#1.sched_count
1032898 Â 1% -43.8% 579984 Â 1% TOTAL sched_debug.cpu#13.ttwu_count
1049631 Â 1% -43.7% 590788 Â 0% TOTAL sched_debug.cpu#2.ttwu_count
1034514 Â 1% -43.7% 582699 Â 1% TOTAL sched_debug.cpu#15.ttwu_count
16 Â20% +75.0% 29 Â 4% TOTAL sched_debug.cpu#9.cpu_load[0]
1023516 Â 1% -43.1% 582541 Â 1% TOTAL sched_debug.cpu#17.ttwu_count
1039799 Â 3% -44.3% 579547 Â 2% TOTAL sched_debug.cpu#25.ttwu_count
1302779 Â36% -54.0% 598642 Â 1% TOTAL sched_debug.cpu#9.sched_count
1012139 Â 0% -42.5% 581867 Â 1% TOTAL sched_debug.cpu#9.ttwu_count
1032088 Â 2% -43.7% 581367 Â 1% TOTAL sched_debug.cpu#12.ttwu_count
1015497 Â 1% -42.6% 582939 Â 1% TOTAL sched_debug.cpu#16.ttwu_count
1023139 Â 1% -43.1% 582533 Â 1% TOTAL sched_debug.cpu#10.ttwu_count
1039034 Â 1% -42.7% 595606 Â 1% TOTAL sched_debug.cpu#0.ttwu_count
1125131 Â22% -46.7% 600109 Â 2% TOTAL sched_debug.cpu#12.sched_count
9019 Â 6% -42.0% 5232 Â10% TOTAL proc-vmstat.pgmigrate_success
9019 Â 6% -42.0% 5232 Â10% TOTAL proc-vmstat.numa_pages_migrated
1020221 Â 2% -42.6% 586085 Â 1% TOTAL sched_debug.cpu#24.ttwu_count
18 Â 9% +69.2% 30 Â 2% TOTAL sched_debug.cpu#9.cpu_load[4]
19 Â14% +84.8% 36 Â34% TOTAL sched_debug.cfs_rq[38]:/.load
30 Â12% +56.7% 47 Â18% TOTAL sched_debug.cfs_rq[25]:/.tg_load_contrib
17 Â13% +62.5% 28 Â 3% TOTAL sched_debug.cpu#1.cpu_load[0]
17 Â15% +64.4% 28 Â 1% TOTAL sched_debug.cpu#22.cpu_load[3]
19 Â16% +52.1% 29 Â 2% TOTAL sched_debug.cpu#25.cpu_load[3]
991690 Â 0% -41.2% 582981 Â 1% TOTAL sched_debug.cpu#8.ttwu_count
18 Â12% +60.2% 29 Â 4% TOTAL sched_debug.cpu#15.cpu_load[4]
18 Â22% +66.7% 30 Â 2% TOTAL sched_debug.cpu#8.cpu_load[3]
17 Â11% +72.4% 30 Â 2% TOTAL sched_debug.cpu#9.cpu_load[3]
21 Â26% +70.5% 35 Â38% TOTAL sched_debug.cpu#61.load
19 Â25% +56.8% 29 Â 2% TOTAL sched_debug.cpu#43.load
18 Â16% +52.7% 27 Â 1% TOTAL sched_debug.cpu#30.cpu_load[2]
18 Â18% +56.0% 28 Â 3% TOTAL sched_debug.cpu#26.cpu_load[4]
17 Â17% +59.1% 28 Â 3% TOTAL sched_debug.cpu#22.cpu_load[1]
17 Â 4% +58.0% 27 Â 4% TOTAL sched_debug.cpu#3.cpu_load[0]
17 Â16% +55.1% 27 Â 3% TOTAL sched_debug.cpu#22.cpu_load[0]
18 Â19% +56.7% 28 Â 4% TOTAL sched_debug.cpu#26.cpu_load[3]
17 Â17% +64.0% 28 Â 2% TOTAL sched_debug.cpu#22.cpu_load[2]
2664 Â15% +63.7% 4362 Â 1% TOTAL sched_debug.cpu#20.curr->pid
19 Â14% +56.6% 31 Â 3% TOTAL sched_debug.cfs_rq[45]:/.load
71 Â43% -52.1% 34 Â13% TOTAL sched_debug.cfs_rq[47]:/.tg_load_contrib
13306 Â 0% +61.5% 21495 Â 0% TOTAL proc-vmstat.nr_shmem
53249 Â 0% +61.5% 85996 Â 0% TOTAL meminfo.Shmem
18 Â 8% +58.7% 29 Â 1% TOTAL sched_debug.cpu#11.cpu_load[3]
18 Â12% +57.1% 28 Â 1% TOTAL sched_debug.cpu#30.cpu_load[3]
17 Â22% +65.2% 29 Â 3% TOTAL sched_debug.cfs_rq[9]:/.runnable_load_avg
19 Â 6% +54.7% 29 Â 1% TOTAL sched_debug.cpu#23.cpu_load[4]
17 Â15% +69.8% 29 Â 3% TOTAL sched_debug.cpu#9.cpu_load[2]
18 Â 7% +59.1% 29 Â 2% TOTAL sched_debug.cpu#11.cpu_load[4]
17 Â17% +70.6% 29 Â 4% TOTAL sched_debug.cpu#9.cpu_load[1]
18 Â 8% +55.4% 28 Â 3% TOTAL sched_debug.cpu#1.cpu_load[1]
18 Â 7% +55.3% 29 Â 3% TOTAL sched_debug.cpu#1.cpu_load[2]
19 Â14% +54.7% 29 Â 1% TOTAL sched_debug.cpu#25.cpu_load[4]
17 Â14% +65.5% 28 Â 1% TOTAL sched_debug.cpu#22.cpu_load[4]
18 Â 9% +53.8% 28 Â 4% TOTAL sched_debug.cfs_rq[13]:/.runnable_load_avg
19 Â11% +54.7% 29 Â 5% TOTAL sched_debug.cpu#15.cpu_load[3]
18 Â12% +55.4% 28 Â 1% TOTAL sched_debug.cfs_rq[23]:/.runnable_load_avg
18 Â 2% +57.1% 28 Â 3% TOTAL sched_debug.cpu#3.cpu_load[3]
18 Â 2% +60.4% 29 Â 2% TOTAL sched_debug.cpu#3.cpu_load[4]
17 Â 2% +60.7% 28 Â 3% TOTAL sched_debug.cpu#3.cpu_load[2]
2777 Â10% +60.7% 4463 Â 7% TOTAL sched_debug.cpu#27.curr->pid
2744 Â13% +58.3% 4342 Â 3% TOTAL sched_debug.cpu#22.curr->pid
988578 Â10% -38.9% 604501 Â 1% TOTAL sched_debug.cpu#24.sched_count
1083808 Â31% -42.7% 621044 Â 7% TOTAL sched_debug.cpu#6.sched_count
19 Â12% +53.1% 29 Â 4% TOTAL sched_debug.cpu#19.cpu_load[3]
19 Â13% +53.6% 29 Â 4% TOTAL sched_debug.cpu#19.cpu_load[4]
966177 Â24% -37.8% 600790 Â 2% TOTAL sched_debug.cpu#27.sched_count
1146030 Â46% -46.9% 608122 Â 1% TOTAL sched_debug.cpu#2.sched_count
18 Â 3% +55.6% 28 Â 3% TOTAL sched_debug.cpu#3.cpu_load[1]
19 Â19% +41.4% 28 Â 2% TOTAL sched_debug.cpu#5.cpu_load[0]
17 Â 9% +58.4% 28 Â 5% TOTAL sched_debug.cfs_rq[3]:/.runnable_load_avg
18 Â19% +52.2% 28 Â 5% TOTAL sched_debug.cpu#26.cpu_load[2]
18 Â17% +50.5% 28 Â 3% TOTAL sched_debug.cpu#30.cpu_load[0]
18 Â17% +50.5% 27 Â 2% TOTAL sched_debug.cpu#30.cpu_load[1]
19 Â11% +49.0% 28 Â 4% TOTAL sched_debug.cpu#15.cpu_load[2]
18 Â12% +56.7% 28 Â 1% TOTAL sched_debug.cpu#13.cpu_load[0]
20 Â13% +38.0% 27 Â 3% TOTAL sched_debug.cfs_rq[31]:/.runnable_load_avg
20 Â18% +40.0% 28 Â 2% TOTAL sched_debug.cpu#5.cpu_load[1]
18 Â22% +56.5% 28 Â 5% TOTAL sched_debug.cpu#8.cpu_load[0]
19 Â16% +49.0% 28 Â 2% TOTAL sched_debug.cpu#25.cpu_load[2]
1040164 Â27% -37.2% 653364 Â10% TOTAL sched_debug.cpu#19.sched_count
80 Â29% -45.9% 43 Â26% TOTAL sched_debug.cfs_rq[37]:/.tg_load_contrib
2953 Â10% +47.1% 4345 Â 2% TOTAL sched_debug.cpu#14.curr->pid
923395 Â 6% -24.6% 695896 Â17% TOTAL sched_debug.cpu#21.sched_count
2800 Â 5% +52.5% 4269 Â 1% TOTAL sched_debug.cpu#11.curr->pid
2746 Â15% +57.0% 4312 Â 2% TOTAL sched_debug.cpu#8.curr->pid
20 Â12% +44.1% 29 Â 1% TOTAL sched_debug.cpu#5.cpu_load[4]
18 Â13% +58.1% 29 Â 1% TOTAL sched_debug.cpu#13.cpu_load[2]
18 Â 7% +54.8% 28 Â 2% TOTAL sched_debug.cpu#11.cpu_load[1]
18 Â13% +56.5% 28 Â 1% TOTAL sched_debug.cpu#13.cpu_load[1]
18 Â22% +60.9% 29 Â 5% TOTAL sched_debug.cpu#8.cpu_load[2]
19 Â 6% +47.9% 28 Â 2% TOTAL sched_debug.cpu#7.cpu_load[2]
19 Â12% +51.6% 28 Â 2% TOTAL sched_debug.cpu#11.cpu_load[0]
18 Â12% +58.7% 29 Â 3% TOTAL sched_debug.cfs_rq[11]:/.runnable_load_avg
18 Â11% +62.6% 29 Â 5% TOTAL sched_debug.cfs_rq[1]:/.runnable_load_avg
19 Â 5% +50.5% 28 Â 1% TOTAL sched_debug.cpu#23.cpu_load[3]
19 Â21% +48.0% 29 Â 3% TOTAL sched_debug.cpu#27.cpu_load[4]
18 Â12% +57.0% 29 Â 2% TOTAL sched_debug.cpu#30.cpu_load[4]
18 Â 7% +53.2% 28 Â 2% TOTAL sched_debug.cpu#11.cpu_load[2]
19 Â 4% +58.9% 30 Â 9% TOTAL sched_debug.cfs_rq[2]:/.runnable_load_avg
1101084 Â30% -44.5% 611225 Â 2% TOTAL sched_debug.cpu#4.sched_count
2942 Â10% +44.9% 4263 Â 2% TOTAL sched_debug.cpu#7.curr->pid
2905 Â 7% +53.9% 4471 Â 2% TOTAL sched_debug.cpu#23.curr->pid
1035532 Â19% -41.9% 601372 Â 1% TOTAL sched_debug.cpu#15.sched_count
663 Â 7% +48.7% 987 Â 0% TOTAL sched_debug.cfs_rq[8]:/.tg_runnable_contrib
30382 Â 7% +49.0% 45275 Â 0% TOTAL sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
30719 Â 7% +46.5% 44998 Â 0% TOTAL sched_debug.cfs_rq[15]:/.avg->runnable_avg_sum
671 Â 7% +45.8% 979 Â 0% TOTAL sched_debug.cfs_rq[15]:/.tg_runnable_contrib
20 Â 9% +46.1% 29 Â 2% TOTAL sched_debug.cpu#18.cpu_load[4]
20 Â 6% +45.6% 30 Â 4% TOTAL sched_debug.cpu#1.cpu_load[4]
19 Â 9% +52.0% 29 Â 2% TOTAL sched_debug.cpu#14.cpu_load[4]
19 Â10% +57.3% 30 Â 2% TOTAL sched_debug.cpu#13.cpu_load[4]
19 Â12% +55.2% 29 Â 2% TOTAL sched_debug.cpu#13.cpu_load[3]
20 Â13% +35.6% 27 Â 1% TOTAL sched_debug.cfs_rq[22]:/.runnable_load_avg
19 Â11% +43.8% 27 Â 2% TOTAL sched_debug.cfs_rq[5]:/.runnable_load_avg
19 Â13% +42.7% 27 Â 3% TOTAL sched_debug.cpu#31.cpu_load[0]
1045 Â 1% +52.1% 1589 Â 7% TOTAL numa-vmstat.node0.nr_alloc_batch
737654 Â 1% +50.3% 1108371 Â 0% TOTAL softirqs.RCU
2824 Â15% +54.7% 4370 Â 1% TOTAL sched_debug.cpu#31.curr->pid
2900 Â 3% +48.4% 4303 Â 1% TOTAL sched_debug.cpu#17.curr->pid
1117 Â 2% +42.8% 1595 Â 6% TOTAL numa-vmstat.node3.nr_alloc_batch
31475 Â 6% +43.6% 45205 Â 0% TOTAL sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
2819 Â13% +51.7% 4276 Â 2% TOTAL sched_debug.cpu#26.curr->pid
31215 Â 4% +44.3% 45055 Â 0% TOTAL sched_debug.cfs_rq[30]:/.avg->runnable_avg_sum
30904 Â 4% +46.4% 45232 Â 0% TOTAL sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
675 Â 4% +45.9% 985 Â 0% TOTAL sched_debug.cfs_rq[10]:/.tg_runnable_contrib
687 Â 6% +42.7% 981 Â 0% TOTAL sched_debug.cfs_rq[26]:/.tg_runnable_contrib
19 Â 3% +47.9% 28 Â 3% TOTAL sched_debug.cpu#4.cpu_load[1]
21 Â18% +32.1% 28 Â 2% TOTAL sched_debug.cpu#31.cpu_load[2]
19 Â12% +44.9% 28 Â 5% TOTAL sched_debug.cpu#15.cpu_load[1]
18 Â21% +57.6% 29 Â 6% TOTAL sched_debug.cpu#8.cpu_load[1]
19 Â17% +47.4% 28 Â 3% TOTAL sched_debug.cpu#25.cpu_load[0]
19 Â 3% +49.0% 28 Â 6% TOTAL sched_debug.cpu#4.cpu_load[0]
20 Â13% +42.0% 28 Â 1% TOTAL sched_debug.cpu#5.cpu_load[3]
20 Â10% +41.0% 28 Â 4% TOTAL sched_debug.cfs_rq[15]:/.runnable_load_avg
19 Â 4% +45.9% 28 Â 2% TOTAL sched_debug.cpu#4.cpu_load[2]
19 Â 7% +47.4% 28 Â 2% TOTAL sched_debug.cpu#23.cpu_load[2]
18 Â 3% +47.9% 27 Â 3% TOTAL sched_debug.cpu#7.cpu_load[0]
19 Â13% +46.9% 28 Â 6% TOTAL sched_debug.cpu#15.cpu_load[0]
19 Â 8% +44.8% 27 Â 2% TOTAL sched_debug.cpu#23.cpu_load[1]
19 Â16% +45.4% 28 Â 1% TOTAL sched_debug.cpu#25.cpu_load[1]
19 Â11% +40.8% 27 Â 1% TOTAL sched_debug.cpu#23.cpu_load[0]
20 Â14% +39.6% 28 Â 1% TOTAL sched_debug.cpu#5.cpu_load[2]
19 Â14% +41.4% 28 Â 2% TOTAL sched_debug.cpu#31.cpu_load[1]
19 Â 6% +45.8% 28 Â 2% TOTAL sched_debug.cpu#7.cpu_load[1]
20 Â13% +40.6% 28 Â 3% TOTAL sched_debug.cfs_rq[25]:/.runnable_load_avg
2943 Â 8% +45.8% 4289 Â 0% TOTAL sched_debug.cpu#1.curr->pid
2703 Â11% +59.2% 4302 Â 1% TOTAL sched_debug.cpu#4.curr->pid
2989 Â13% +42.7% 4265 Â 2% TOTAL sched_debug.cpu#24.curr->pid
3048 Â 8% +42.7% 4349 Â 1% TOTAL sched_debug.cpu#3.curr->pid
30576 Â 6% +47.4% 45060 Â 0% TOTAL sched_debug.cfs_rq[22]:/.avg->runnable_avg_sum
682 Â 4% +43.6% 979 Â 0% TOTAL sched_debug.cfs_rq[30]:/.tg_runnable_contrib
1100 Â 3% +49.4% 1643 Â 4% TOTAL numa-vmstat.node1.nr_alloc_batch
4389 Â 1% +46.7% 6439 Â 1% TOTAL proc-vmstat.nr_alloc_batch
668 Â 6% +46.9% 982 Â 0% TOTAL sched_debug.cfs_rq[22]:/.tg_runnable_contrib
3115 Â 9% +38.4% 4310 Â 1% TOTAL sched_debug.cpu#5.curr->pid
695 Â 4% +41.8% 985 Â 0% TOTAL sched_debug.cfs_rq[18]:/.tg_runnable_contrib
31812 Â 4% +42.5% 45324 Â 0% TOTAL sched_debug.cfs_rq[18]:/.avg->runnable_avg_sum
1157 Â 4% +40.2% 1622 Â 7% TOTAL numa-vmstat.node2.nr_alloc_batch
19 Â 6% +51.6% 28 Â 2% TOTAL sched_debug.cpu#7.cpu_load[3]
19 Â 6% +51.5% 29 Â 1% TOTAL sched_debug.cpu#2.cpu_load[4]
19 Â 6% +53.7% 29 Â 2% TOTAL sched_debug.cpu#7.cpu_load[4]
19 Â13% +51.0% 29 Â 3% TOTAL sched_debug.cpu#19.cpu_load[2]
19 Â10% +49.5% 29 Â 4% TOTAL sched_debug.cpu#2.cpu_load[2]
19 Â13% +50.0% 28 Â 4% TOTAL sched_debug.cpu#19.cpu_load[1]
19 Â 6% +49.5% 29 Â 3% TOTAL sched_debug.cpu#2.cpu_load[3]
20 Â 8% +45.5% 29 Â 1% TOTAL sched_debug.cpu#18.cpu_load[3]
19 Â 5% +51.0% 29 Â 2% TOTAL sched_debug.cpu#20.cpu_load[4]
20 Â 7% +41.2% 28 Â 2% TOTAL sched_debug.cpu#18.cpu_load[2]
20 Â 7% +47.0% 29 Â 2% TOTAL sched_debug.cpu#4.cpu_load[4]
20 Â 5% +45.0% 29 Â 2% TOTAL sched_debug.cpu#4.cpu_load[3]
19 Â16% +50.5% 28 Â 1% TOTAL sched_debug.cpu#27.cpu_load[3]
19 Â12% +44.9% 28 Â 4% TOTAL sched_debug.cpu#16.cpu_load[0]
19 Â 5% +48.5% 29 Â 3% TOTAL sched_debug.cpu#1.cpu_load[3]
19 Â 6% +50.0% 29 Â 1% TOTAL sched_debug.cpu#20.cpu_load[3]
2924 Â10% +48.1% 4330 Â 2% TOTAL sched_debug.cpu#13.curr->pid
2942 Â 7% +45.9% 4291 Â 2% TOTAL sched_debug.cpu#15.curr->pid
2957 Â 5% +44.4% 4269 Â 1% TOTAL sched_debug.cpu#9.curr->pid
2930 Â12% +47.6% 4326 Â 3% TOTAL sched_debug.cpu#16.curr->pid
684 Â 7% +44.1% 986 Â 0% TOTAL sched_debug.cfs_rq[27]:/.tg_runnable_contrib
32158 Â 4% +40.1% 45057 Â 0% TOTAL sched_debug.cfs_rq[24]:/.avg->runnable_avg_sum
2949 Â 9% +46.3% 4315 Â 2% TOTAL sched_debug.cpu#19.curr->pid
20 Â 9% +46.5% 29 Â 1% TOTAL sched_debug.cpu#16.cpu_load[4]
19 Â14% +51.0% 29 Â 5% TOTAL sched_debug.cpu#19.cpu_load[0]
31356 Â 7% +43.8% 45098 Â 0% TOTAL sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
33090 Â 7% +36.2% 45061 Â 0% TOTAL sched_debug.cfs_rq[29]:/.avg->runnable_avg_sum
721 Â 7% +35.7% 979 Â 0% TOTAL sched_debug.cfs_rq[29]:/.tg_runnable_contrib
31372 Â 5% +43.7% 45081 Â 0% TOTAL sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
701 Â 4% +39.8% 980 Â 0% TOTAL sched_debug.cfs_rq[24]:/.tg_runnable_contrib
3005 Â 6% +44.1% 4331 Â 2% TOTAL sched_debug.cpu#28.curr->pid
722 Â 5% +36.2% 983 Â 0% TOTAL sched_debug.cfs_rq[28]:/.tg_runnable_contrib
32215 Â 6% +40.3% 45183 Â 0% TOTAL sched_debug.cfs_rq[19]:/.avg->runnable_avg_sum
32108 Â 3% +40.5% 45102 Â 0% TOTAL sched_debug.cfs_rq[13]:/.avg->runnable_avg_sum
703 Â 6% +39.9% 983 Â 0% TOTAL sched_debug.cfs_rq[19]:/.tg_runnable_contrib
33049 Â 5% +36.4% 45063 Â 0% TOTAL sched_debug.cfs_rq[28]:/.avg->runnable_avg_sum
685 Â 5% +43.1% 981 Â 0% TOTAL sched_debug.cfs_rq[11]:/.tg_runnable_contrib
702 Â 3% +39.7% 981 Â 0% TOTAL sched_debug.cfs_rq[13]:/.tg_runnable_contrib
32365 Â 1% +39.3% 45086 Â 0% TOTAL sched_debug.cfs_rq[20]:/.avg->runnable_avg_sum
706 Â 1% +38.9% 982 Â 0% TOTAL sched_debug.cfs_rq[20]:/.tg_runnable_contrib
1120854 Â41% -45.0% 616182 Â 4% TOTAL sched_debug.cpu#5.sched_count
31452 Â 3% +44.5% 45437 Â 1% TOTAL sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
32534 Â 5% +38.4% 45040 Â 0% TOTAL sched_debug.cfs_rq[31]:/.avg->runnable_avg_sum
687 Â 3% +44.1% 990 Â 1% TOTAL sched_debug.cfs_rq[9]:/.tg_runnable_contrib
702 Â 7% +40.4% 985 Â 0% TOTAL sched_debug.cfs_rq[25]:/.tg_runnable_contrib
694 Â 5% +41.6% 983 Â 0% TOTAL sched_debug.cfs_rq[14]:/.tg_runnable_contrib
19 Â24% +50.0% 28 Â 7% TOTAL sched_debug.cfs_rq[8]:/.runnable_load_avg
19 Â 9% +46.9% 28 Â 3% TOTAL sched_debug.cpu#20.cpu_load[2]
19 Â12% +42.4% 28 Â 4% TOTAL sched_debug.cpu#16.cpu_load[1]
19 Â10% +47.5% 29 Â 6% TOTAL sched_debug.cpu#2.cpu_load[1]
19 Â14% +43.9% 28 Â 2% TOTAL sched_debug.cpu#20.cpu_load[0]
20 Â13% +33.7% 27 Â 2% TOTAL sched_debug.cpu#24.cpu_load[1]
20 Â13% +50.0% 30 Â11% TOTAL sched_debug.cpu#2.cpu_load[0]
18 Â16% +48.9% 28 Â 3% TOTAL sched_debug.cpu#27.cpu_load[2]
18 Â19% +46.8% 27 Â 5% TOTAL sched_debug.cpu#26.cpu_load[1]
19 Â14% +43.3% 27 Â 4% TOTAL sched_debug.cfs_rq[30]:/.runnable_load_avg
20 Â 6% +41.6% 28 Â 4% TOTAL sched_debug.cfs_rq[4]:/.runnable_load_avg
20 Â11% +40.0% 28 Â 2% TOTAL sched_debug.cfs_rq[16]:/.runnable_load_avg
20 Â 4% +36.3% 27 Â 2% TOTAL sched_debug.cfs_rq[20]:/.runnable_load_avg
19 Â13% +44.9% 28 Â 1% TOTAL sched_debug.cpu#16.cpu_load[2]
710 Â 5% +38.1% 981 Â 0% TOTAL sched_debug.cfs_rq[31]:/.tg_runnable_contrib
32116 Â 7% +40.5% 45109 Â 0% TOTAL sched_debug.cfs_rq[25]:/.avg->runnable_avg_sum
31789 Â 5% +41.7% 45041 Â 0% TOTAL sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
31727 Â 4% +42.1% 45080 Â 0% TOTAL sched_debug.cfs_rq[17]:/.avg->runnable_avg_sum
696 Â 6% +41.2% 983 Â 0% TOTAL sched_debug.cfs_rq[0]:/.tg_runnable_contrib
31871 Â 6% +41.4% 45051 Â 0% TOTAL sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
32292 Â 5% +39.4% 45011 Â 0% TOTAL sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
699 Â 3% +40.5% 982 Â 0% TOTAL sched_debug.cfs_rq[16]:/.tg_runnable_contrib
706 Â 5% +39.0% 981 Â 0% TOTAL sched_debug.cfs_rq[7]:/.tg_runnable_contrib
3025 Â16% +43.2% 4334 Â 2% TOTAL sched_debug.cpu#10.curr->pid
32043 Â 3% +40.6% 45060 Â 0% TOTAL sched_debug.cfs_rq[16]:/.avg->runnable_avg_sum
705 Â 4% +39.8% 985 Â 0% TOTAL sched_debug.cfs_rq[3]:/.tg_runnable_contrib
32260 Â 4% +39.8% 45100 Â 0% TOTAL sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
694 Â 4% +41.3% 980 Â 0% TOTAL sched_debug.cfs_rq[17]:/.tg_runnable_contrib
32722 Â 3% +38.1% 45195 Â 0% TOTAL sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
2869 Â11% +52.0% 4362 Â 3% TOTAL sched_debug.cpu#30.curr->pid
698 Â 5% +40.6% 982 Â 0% TOTAL sched_debug.cfs_rq[23]:/.tg_runnable_contrib
32007 Â 5% +40.7% 45031 Â 0% TOTAL sched_debug.cfs_rq[23]:/.avg->runnable_avg_sum
21 Â 7% +34.3% 29 Â 3% TOTAL sched_debug.cfs_rq[18]:/.runnable_load_avg
20 Â11% +40.8% 29 Â 2% TOTAL sched_debug.cpu#24.cpu_load[3]
21 Â10% +34.6% 28 Â 4% TOTAL sched_debug.cpu#29.cpu_load[3]
21 Â10% +36.8% 29 Â 3% TOTAL sched_debug.cpu#29.cpu_load[4]
21 Â 9% +39.0% 29 Â 1% TOTAL sched_debug.cpu#24.cpu_load[4]
21 Â12% +34.9% 28 Â 5% TOTAL sched_debug.cpu#29.cpu_load[2]
20 Â11% +41.7% 29 Â 2% TOTAL sched_debug.cpu#21.cpu_load[3]
20 Â10% +45.0% 29 Â 2% TOTAL sched_debug.cpu#14.cpu_load[3]
20 Â15% +38.2% 28 Â 3% TOTAL sched_debug.cpu#14.cpu_load[1]
20 Â11% +45.0% 29 Â 2% TOTAL sched_debug.cpu#16.cpu_load[3]
23 Â10% +25.2% 28 Â 2% TOTAL sched_debug.cpu#55.load
21 Â 6% +34.6% 28 Â 2% TOTAL sched_debug.cpu#28.cpu_load[4]
20 Â 6% +38.8% 28 Â 3% TOTAL sched_debug.cfs_rq[7]:/.runnable_load_avg
20 Â11% +44.1% 29 Â 1% TOTAL sched_debug.cpu#21.cpu_load[4]
2959 Â 8% +45.8% 4315 Â 1% TOTAL sched_debug.cpu#6.curr->pid
33054 Â 2% +36.8% 45208 Â 0% TOTAL sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
715 Â 2% +37.4% 983 Â 0% TOTAL sched_debug.cfs_rq[2]:/.tg_runnable_contrib
32267 Â 5% +39.7% 45074 Â 0% TOTAL sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
705 Â 4% +38.9% 979 Â 0% TOTAL sched_debug.cfs_rq[4]:/.tg_runnable_contrib
2983 Â12% +47.3% 4394 Â 2% TOTAL sched_debug.cpu#2.curr->pid
32679 Â 5% +37.9% 45073 Â 0% TOTAL sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
722 Â 2% +35.8% 981 Â 0% TOTAL sched_debug.cfs_rq[1]:/.tg_runnable_contrib
2841 Â20% +54.9% 4399 Â 1% TOTAL sched_debug.cpu#0.curr->pid
32684 Â 5% +37.8% 45024 Â 0% TOTAL sched_debug.cfs_rq[21]:/.avg->runnable_avg_sum
714 Â 5% +37.3% 980 Â 0% TOTAL sched_debug.cfs_rq[6]:/.tg_runnable_contrib
22 Â10% +33.3% 29 Â 1% TOTAL sched_debug.cpu#12.cpu_load[4]
1044460 Â29% -42.8% 597073 Â 1% TOTAL sched_debug.cpu#31.sched_count
2962 Â10% +45.6% 4314 Â 2% TOTAL sched_debug.cpu#21.curr->pid
107755 Â 0% +35.8% 146314 Â 0% TOTAL sched_debug.cfs_rq[16]:/.exec_clock
3040 Â14% +45.8% 4431 Â 3% TOTAL sched_debug.cpu#18.curr->pid
713 Â 5% +37.3% 980 Â 0% TOTAL sched_debug.cfs_rq[21]:/.tg_runnable_contrib
107444 Â 0% +36.2% 146370 Â 0% TOTAL sched_debug.cfs_rq[8]:/.exec_clock
33921 Â 7% +32.9% 45070 Â 0% TOTAL sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
19 Â16% +41.7% 27 Â 4% TOTAL sched_debug.cpu#27.cpu_load[0]
18 Â16% +46.8% 27 Â 4% TOTAL sched_debug.cpu#27.cpu_load[1]
20 Â13% +35.3% 27 Â 4% TOTAL sched_debug.cpu#24.cpu_load[0]
110484 Â 1% +34.1% 148181 Â 0% TOTAL sched_debug.cfs_rq[10]:/.exec_clock
108447 Â 0% +34.7% 146038 Â 0% TOTAL sched_debug.cfs_rq[17]:/.exec_clock
741 Â 7% +32.3% 980 Â 0% TOTAL sched_debug.cfs_rq[5]:/.tg_runnable_contrib
3131 Â10% +40.9% 4412 Â 5% TOTAL sched_debug.cpu#29.curr->pid
108894 Â 1% +33.9% 145821 Â 0% TOTAL sched_debug.cfs_rq[24]:/.exec_clock
108897 Â 0% +34.3% 146281 Â 0% TOTAL sched_debug.cfs_rq[9]:/.exec_clock
110117 Â 2% +32.5% 145856 Â 0% TOTAL sched_debug.cfs_rq[25]:/.exec_clock
109871 Â 1% +32.8% 145922 Â 0% TOTAL sched_debug.cfs_rq[15]:/.exec_clock
110887 Â 1% +32.8% 147227 Â 0% TOTAL sched_debug.cfs_rq[12]:/.exec_clock
20 Â13% +40.2% 28 Â 4% TOTAL sched_debug.cpu#0.cpu_load[3]
21 Â 7% +30.5% 27 Â 2% TOTAL sched_debug.cfs_rq[28]:/.runnable_load_avg
20 Â 7% +37.9% 28 Â 1% TOTAL sched_debug.cpu#18.cpu_load[1]
21 Â 9% +30.6% 28 Â 2% TOTAL sched_debug.cfs_rq[29]:/.runnable_load_avg
19 Â12% +43.4% 28 Â 1% TOTAL sched_debug.cpu#20.cpu_load[1]
20 Â 9% +38.8% 28 Â 2% TOTAL sched_debug.cpu#21.cpu_load[2]
21 Â 5% +33.0% 28 Â 2% TOTAL sched_debug.cpu#28.cpu_load[3]
21 Â 5% +32.4% 27 Â 2% TOTAL sched_debug.cpu#28.cpu_load[1]
20 Â 5% +33.7% 27 Â 2% TOTAL sched_debug.cpu#28.cpu_load[2]
21 Â 9% +29.0% 27 Â 5% TOTAL sched_debug.cpu#29.cpu_load[0]
21 Â13% +35.2% 28 Â 3% TOTAL sched_debug.cpu#24.cpu_load[2]
21 Â 9% +34.3% 28 Â 1% TOTAL sched_debug.cpu#18.cpu_load[0]
3098 Â11% +39.9% 4335 Â 2% TOTAL sched_debug.cpu#25.curr->pid
109876 Â 0% +33.3% 146491 Â 0% TOTAL sched_debug.cfs_rq[11]:/.exec_clock
268016 Â 3% +32.8% 355948 Â 9% TOTAL meminfo.Committed_AS
109686 Â 1% +33.3% 146181 Â 0% TOTAL sched_debug.cfs_rq[13]:/.exec_clock
109930 Â 1% +33.3% 146529 Â 0% TOTAL sched_debug.cfs_rq[18]:/.exec_clock
110814 Â 2% +31.8% 146073 Â 0% TOTAL sched_debug.cfs_rq[22]:/.exec_clock
111231 Â 1% +31.2% 145955 Â 0% TOTAL sched_debug.cfs_rq[26]:/.exec_clock
792742 Â 2% -24.3% 599925 Â 1% TOTAL sched_debug.cpu#8.nr_switches
20 Â13% +40.6% 28 Â 2% TOTAL sched_debug.cpu#14.cpu_load[2]
22 Â11% +30.4% 29 Â 1% TOTAL sched_debug.cpu#12.cpu_load[3]
22 Â10% +32.7% 29 Â 3% TOTAL sched_debug.cfs_rq[21]:/.runnable_load_avg
22 Â11% +26.5% 28 Â 1% TOTAL sched_debug.cpu#12.cpu_load[2]
20 Â11% +43.1% 29 Â 5% TOTAL sched_debug.cpu#0.cpu_load[4]
22 Â11% +29.1% 28 Â 2% TOTAL sched_debug.cpu#12.cpu_load[1]
22 Â21% +30.0% 28 Â 2% TOTAL sched_debug.cpu#17.cpu_load[3]
896719 Â18% -31.3% 615816 Â 4% TOTAL sched_debug.cpu#30.sched_count
110467 Â 1% +32.0% 145847 Â 0% TOTAL sched_debug.cfs_rq[19]:/.exec_clock
110856 Â 1% +31.7% 145977 Â 0% TOTAL sched_debug.cfs_rq[20]:/.exec_clock
112749 Â 1% +32.1% 148960 Â 0% TOTAL sched_debug.cfs_rq[2]:/.exec_clock
110628 Â 1% +32.0% 146083 Â 0% TOTAL sched_debug.cfs_rq[23]:/.exec_clock
116500 Â 1% +31.0% 152586 Â 0% TOTAL sched_debug.cfs_rq[0]:/.exec_clock
33803 Â 6% +33.2% 45033 Â 0% TOTAL sched_debug.cfs_rq[12]:/.avg->runnable_avg_sum
738 Â 6% +32.9% 981 Â 0% TOTAL sched_debug.cfs_rq[12]:/.tg_runnable_contrib
111685 Â 0% +30.4% 145683 Â 0% TOTAL sched_debug.cfs_rq[4]:/.exec_clock
111045 Â 1% +31.3% 145845 Â 0% TOTAL sched_debug.cfs_rq[21]:/.exec_clock
111156 Â 0% +31.3% 145923 Â 0% TOTAL sched_debug.cfs_rq[14]:/.exec_clock
111731 Â 0% +30.3% 145623 Â 0% TOTAL sched_debug.cfs_rq[5]:/.exec_clock
111147 Â 1% +31.0% 145641 Â 0% TOTAL sched_debug.cfs_rq[6]:/.exec_clock
3172 Â 3% +33.8% 4245 Â 0% TOTAL sched_debug.cpu#12.curr->pid
111847 Â 0% +30.4% 145836 Â 0% TOTAL sched_debug.cfs_rq[1]:/.exec_clock
20 Â39% +43.1% 29 Â 3% TOTAL sched_debug.cpu#63.load
111241 Â 0% +31.0% 145727 Â 0% TOTAL sched_debug.cfs_rq[3]:/.exec_clock
111457 Â 1% +30.7% 145630 Â 0% TOTAL sched_debug.cfs_rq[31]:/.exec_clock
112620 Â 1% +29.4% 145693 Â 0% TOTAL sched_debug.cfs_rq[27]:/.exec_clock
112963 Â 1% +28.9% 145645 Â 0% TOTAL sched_debug.cfs_rq[29]:/.exec_clock
112421 Â 0% +29.6% 145673 Â 0% TOTAL sched_debug.cfs_rq[7]:/.exec_clock
7338 Â11% +23.3% 9050 Â 7% TOTAL numa-vmstat.node3.nr_anon_pages
29353 Â11% +23.3% 36204 Â 7% TOTAL numa-meminfo.node3.AnonPages
766344 Â 4% -22.2% 596356 Â 1% TOTAL sched_debug.cpu#12.nr_switches
112871 Â 0% +29.0% 145613 Â 0% TOTAL sched_debug.cfs_rq[28]:/.exec_clock
785724 Â 2% -22.0% 612543 Â 0% TOTAL sched_debug.cpu#0.nr_switches
770632 Â 2% -22.3% 598517 Â 1% TOTAL sched_debug.cpu#15.nr_switches
113328 Â 0% +28.5% 145614 Â 0% TOTAL sched_debug.cfs_rq[30]:/.exec_clock
19 Â16% +40.2% 27 Â 2% TOTAL sched_debug.cfs_rq[0]:/.runnable_load_avg
19 Â23% +39.2% 27 Â 4% TOTAL sched_debug.cfs_rq[26]:/.runnable_load_avg
763775 Â 3% -21.8% 596955 Â 1% TOTAL sched_debug.cpu#13.nr_switches
21 Â10% +30.6% 28 Â 4% TOTAL sched_debug.cpu#29.cpu_load[1]
20 Â10% +36.5% 28 Â 3% TOTAL sched_debug.cpu#21.cpu_load[1]
21 Â18% +31.1% 27 Â 1% TOTAL sched_debug.cpu#0.cpu_load[1]
22 Â 4% +30.0% 28 Â 4% TOTAL sched_debug.cpu#52.load
21 Â11% +30.8% 28 Â 3% TOTAL sched_debug.cpu#21.cpu_load[0]
21 Â 6% +28.4% 28 Â 3% TOTAL sched_debug.cpu#28.cpu_load[0]
21 Â19% +31.8% 28 Â 2% TOTAL sched_debug.cpu#17.cpu_load[2]
19 Â20% +45.8% 28 Â 3% TOTAL sched_debug.cpu#26.cpu_load[0]
22 Â16% +27.3% 28 Â 5% TOTAL sched_debug.cfs_rq[27]:/.runnable_load_avg
21 Â17% +29.4% 28 Â 2% TOTAL sched_debug.cpu#17.cpu_load[0]
21 Â12% +32.1% 28 Â 2% TOTAL sched_debug.cfs_rq[24]:/.runnable_load_avg
20 Â17% +34.6% 28 Â 2% TOTAL sched_debug.cpu#0.cpu_load[2]
773695 Â 2% -22.5% 599935 Â 1% TOTAL sched_debug.cpu#11.nr_switches
760832 Â 2% -21.3% 598498 Â 1% TOTAL sched_debug.cpu#9.nr_switches
23 Â17% +20.3% 28 Â 5% TOTAL sched_debug.cfs_rq[52]:/.load
761399 Â 1% -21.2% 600107 Â 1% TOTAL sched_debug.cpu#10.nr_switches
770683 Â 1% -21.1% 607908 Â 1% TOTAL sched_debug.cpu#2.nr_switches
757803 Â 4% -20.7% 600923 Â 1% TOTAL sched_debug.cpu#6.nr_switches
750665 Â 1% -19.1% 606952 Â 0% TOTAL sched_debug.cpu#1.nr_switches
744077 Â 2% -19.6% 598358 Â 0% TOTAL sched_debug.cpu#16.nr_switches
753828 Â 1% -19.6% 605817 Â 1% TOTAL sched_debug.cpu#3.nr_switches
742810 Â 2% -18.9% 602251 Â 0% TOTAL sched_debug.cpu#5.nr_switches
732991 Â 2% -18.5% 597448 Â 1% TOTAL sched_debug.cpu#17.nr_switches
725349 Â 2% -17.9% 595187 Â 1% TOTAL sched_debug.cpu#14.nr_switches
21 Â18% +29.6% 28 Â 2% TOTAL sched_debug.cpu#17.cpu_load[1]
22 Â14% +28.2% 28 Â 2% TOTAL sched_debug.cpu#12.cpu_load[0]
736992 Â 2% -17.8% 605506 Â 1% TOTAL sched_debug.cpu#4.nr_switches
724402 Â 1% -16.7% 603369 Â 0% TOTAL sched_debug.cpu#7.nr_switches
712130 Â 2% -15.5% 602079 Â 1% TOTAL sched_debug.cpu#24.nr_switches
715409 Â 5% -16.1% 600473 Â 2% TOTAL sched_debug.cpu#22.nr_switches
169672 Â 0% +19.0% 201867 Â 0% TOTAL meminfo.Active(anon)
42418 Â 0% +18.9% 50446 Â 0% TOTAL proc-vmstat.nr_active_anon
710553 Â 3% -16.0% 597062 Â 1% TOTAL sched_debug.cpu#31.nr_switches
723251 Â 2% -17.1% 599549 Â 1% TOTAL sched_debug.cpu#18.nr_switches
706093 Â 4% -14.9% 601183 Â 1% TOTAL sched_debug.cpu#26.nr_switches
704669 Â 2% -14.9% 599338 Â 1% TOTAL sched_debug.cpu#20.nr_switches
703753 Â 3% -15.3% 596238 Â 1% TOTAL sched_debug.cpu#21.nr_switches
702440 Â 5% -15.1% 596380 Â 1% TOTAL sched_debug.cpu#25.nr_switches
702660 Â 4% -14.7% 599141 Â 1% TOTAL sched_debug.cpu#19.nr_switches
22 Â14% +28.2% 28 Â 4% TOTAL sched_debug.cfs_rq[12]:/.runnable_load_avg
22 Â22% +32.4% 29 Â 1% TOTAL sched_debug.cpu#34.load
701182 Â 4% -14.6% 598606 Â 1% TOTAL sched_debug.cpu#23.nr_switches
0.23 Â 5% -15.8% 0.19 Â 6% TOTAL turbostat.%pc3
238557 Â 0% +13.5% 270767 Â 0% TOTAL meminfo.Active
135003 Â 0% +12.7% 152182 Â 0% TOTAL sched_debug.cpu#56.nr_load_updates
135518 Â 1% +12.2% 152028 Â 0% TOTAL sched_debug.cpu#63.nr_load_updates
668473 Â 4% -10.5% 598372 Â 1% TOTAL sched_debug.cpu#29.nr_switches
137963 Â 0% +11.9% 154381 Â 0% TOTAL sched_debug.cpu#57.nr_load_updates
135802 Â 0% +12.1% 152194 Â 0% TOTAL sched_debug.cpu#58.nr_load_updates
673342 Â 3% -10.3% 604167 Â 2% TOTAL sched_debug.cpu#30.nr_switches
136075 Â 0% +11.8% 152117 Â 0% TOTAL sched_debug.cpu#60.nr_load_updates
136265 Â 0% +11.6% 152124 Â 0% TOTAL sched_debug.cpu#59.nr_load_updates
136657 Â 0% +11.5% 152399 Â 0% TOTAL sched_debug.cpu#48.nr_load_updates
672367 Â 4% -10.6% 600775 Â 2% TOTAL sched_debug.cpu#27.nr_switches
136400 Â 0% +11.6% 152176 Â 0% TOTAL sched_debug.cpu#61.nr_load_updates
136146 Â 0% +11.7% 152103 Â 0% TOTAL sched_debug.cpu#62.nr_load_updates
1848 Â 2% +10.9% 2049 Â 5% TOTAL numa-meminfo.node0.Mapped
136683 Â 0% +11.4% 152199 Â 0% TOTAL sched_debug.cpu#55.nr_load_updates
137175 Â 0% +11.0% 152310 Â 0% TOTAL sched_debug.cpu#51.nr_load_updates
139156 Â 0% +10.8% 154236 Â 0% TOTAL sched_debug.cpu#49.nr_load_updates
137076 Â 0% +11.1% 152253 Â 0% TOTAL sched_debug.cpu#53.nr_load_updates
672781 Â 3% -10.5% 601982 Â 1% TOTAL sched_debug.cpu#28.nr_switches
136873 Â 1% +11.2% 152248 Â 0% TOTAL sched_debug.cpu#52.nr_load_updates
137601 Â 0% +10.8% 152404 Â 0% TOTAL sched_debug.cpu#50.nr_load_updates
137409 Â 0% +10.8% 152228 Â 0% TOTAL sched_debug.cpu#54.nr_load_updates
608198 Â 1% +5895.2% 36462984 Â 0% TOTAL time.voluntary_context_switches
62.13 Â 1% +247.4% 215.86 Â 1% TOTAL time.user_time
560843 Â 0% -53.2% 262235 Â 0% TOTAL vmstat.system.cs
54.79 Â 0% +81.7% 99.56 Â 0% TOTAL turbostat.%c0
38022 Â 0% +76.6% 67144 Â 0% TOTAL vmstat.system.in
2726 Â 0% +68.6% 4596 Â 0% TOTAL time.percent_of_cpu_this_job_got
8155 Â 0% +67.2% 13638 Â 0% TOTAL time.system_time
951709 Â 0% +53.4% 1459554 Â 0% TOTAL time.involuntary_context_switches
227030 Â 1% -16.1% 190393 Â 0% TOTAL time.minor_page_faults
qperf.sctp.bw
1.3e+09 ++---------------------------------------------------------------+
1.2e+09 ++ O O O |
O OO O O O O O O OO OO OO O O O O O |
1.1e+09 ++ OO OO O O OO OO O O O O OO O O
1e+09 ++ O |
| |
9e+08 ++ |
8e+08 ++ |
7e+08 ++ |
| |
6e+08 ++ |
5e+08 ++ |
| .**. *. *. |
4e+08 *+**.**.*. *.**.**.*.**.**.*.**.**.**.* **.*.**.* * *.** |
3e+08 ++--------*------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
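For a quick manual check without the full LKP harness, the same SCTP load can
be started by hand. A rough sketch, assuming netperf/netserver with SCTP
support are installed on the test box; the job launches 128 client instances,
matching the command lines recorded in the job output further below:

netserver
# start 128 parallel SCTP_STREAM_MANY clients, 300s runtime, 10K send size
for i in $(seq 128); do
        netperf -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K &
done
wait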
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
---
testcase: netperf
default_monitors:
watch-oom:
wait: pre-test
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
energy:
cpuidle:
cpufreq:
turbostat:
sched_debug:
interval: 10
pmeter:
model: Nehalem-EX
memory: 256G
nr_ssd_partitions: 6
ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSD*part1"
swap_partitions: "/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WCAV5F059074-part2"
runtime: 300s
nr_threads:
- 200%
perf-profile:
freq: 800
netperf:
send_size: 10K
test:
- SCTP_STREAM_MANY
branch: linus/master
commit: 19583ca584d6f574384e17fe7613dfaeadcdc4a6
repeat_to: 3
enqueue_time: 2014-09-25 21:41:16.275537552 +08:00
testbox: lkp-nex04
kconfig: x86_64-rhel
kernel: "/kernel/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/vmlinuz-3.16.0"
user: lkp
queue: wfg
result_root: "/result/lkp-nex04/netperf/300s-200%-10K-SCTP_STREAM_MANY/debian-x86_64.cgz/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/0"
job_file: "/lkp/scheduled/lkp-nex04/wfg_netperf-300s-200%-10K-SCTP_STREAM_MANY-x86_64-rhel-19583ca584d6f574384e17fe7613dfaeadcdc4a6-2.yaml"
dequeue_time: 2014-09-30 00:21:40.347862294 +08:00
history_time: 300
job_state: finished
loadavg: 139.56 108.61 48.96 2/529 13193
start_time: '1412007779'
end_time: '1412008083'
version: "/lkp/lkp/.src-20140929-152043"
netserver
netperf -t SCTP_STREAM_MANY -c -C -l 300 -- -m 10K
(the above netperf invocation appears 128 times in total in the job output, one line per concurrent client)
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx