Re: [LKP] [sched] a15b12ac36a: -46.9% time.voluntary_context_switches +1.5% will-it-scale.per_process_ops
From: Kirill Tkhai
Date: Tue Dec 23 2014 - 04:05:50 EST
Hi, Huang,
what do these figures mean? What does the test do?
23.12.2014, 08:16, "Huang Ying" <ying.huang@xxxxxxxxx>:
> FYI, we noticed the below changes on
>
> commit a15b12ac36ad4e7b856a4ae54937ae26a51aebad ("sched: Do not stop cpu in set_cpus_allowed_ptr() if task is not running")
>
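> For context, the commit under test avoids the expensive stop-cpu migration path in set_cpus_allowed_ptr() when the task whose affinity changes is not currently running. A minimal userspace model of that decision (a sketch with hypothetical names, not the actual kernel code) is:
>
> ```c
> #include <assert.h>
> #include <stdbool.h>
> #include <stdio.h>
>
> /* Hypothetical model of the fast path added by the commit: a cpu-stop
>  * (expensive; wakes the stopper thread and forces a context switch on
>  * the target CPU) is only needed when the task is actually running or
>  * mid-wakeup.  A task that is merely queued can be dequeued and
>  * enqueued on another runqueue directly, and a sleeping task simply
>  * picks an allowed CPU at its next wakeup. */
> enum task_where { TASK_RUNNING_ON_CPU, TASK_WAKING_UP, TASK_QUEUED, TASK_SLEEPING };
>
> static bool needs_cpu_stop(enum task_where w)
> {
>     return w == TASK_RUNNING_ON_CPU || w == TASK_WAKING_UP;
> }
>
> int main(void)
> {
>     assert(needs_cpu_stop(TASK_RUNNING_ON_CPU));
>     assert(needs_cpu_stop(TASK_WAKING_UP));
>     assert(!needs_cpu_stop(TASK_QUEUED));   /* moved with dequeue/enqueue */
>     assert(!needs_cpu_stop(TASK_SLEEPING)); /* re-placed at wakeup */
>     puts("ok");
>     return 0;
> }
> ```
>
> Fewer stopper-thread wakeups on affinity changes would be consistent with the large drop in time.voluntary_context_switches reported below.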
> testbox/testcase/testparams: lkp-g5/will-it-scale/performance-lock1
>
> 1ba93d42727c4400  a15b12ac36ad4e7b856a4ae549
> ----------------  --------------------------
>          %stddev     %change         %stddev
>              \          |                \
>   1517261 ±  0%      +1.5%    1539994 ±  0%  will-it-scale.per_process_ops
>       247 ± 30%    +131.8%        573 ± 49%  sched_debug.cpu#61.ttwu_count
>       225 ± 22%    +142.8%        546 ± 34%  sched_debug.cpu#81.ttwu_local
>     15115 ± 44%     +37.3%      20746 ± 40%  numa-meminfo.node7.Active
>      1028 ± 38%    +115.3%       2214 ± 36%  sched_debug.cpu#16.ttwu_local
>         2 ± 19%    +133.3%          5 ± 43%  sched_debug.cpu#89.cpu_load[3]
>        21 ± 45%     +88.2%         40 ± 23%  sched_debug.cfs_rq[99]:/.tg_load_contrib
>       414 ± 33%     +98.6%        823 ± 28%  sched_debug.cpu#81.ttwu_count
>         4 ± 10%     +88.2%          8 ± 12%  sched_debug.cfs_rq[33]:/.runnable_load_avg
>        22 ± 26%     +80.9%         40 ± 24%  sched_debug.cfs_rq[103]:/.tg_load_contrib
>         7 ± 17%     -41.4%          4 ± 25%  sched_debug.cfs_rq[41]:/.load
>         7 ± 17%     -37.9%          4 ± 19%  sched_debug.cpu#41.load
>         3 ± 22%    +106.7%          7 ± 10%  sched_debug.cfs_rq[36]:/.runnable_load_avg
>       174 ± 13%     +48.7%        259 ± 31%  sched_debug.cpu#112.ttwu_count
>         4 ± 19%     +88.9%          8 ±  5%  sched_debug.cfs_rq[35]:/.runnable_load_avg
>       260 ± 10%     +55.6%        405 ± 26%  numa-vmstat.node3.nr_anon_pages
>      1042 ± 10%     +56.0%       1626 ± 26%  numa-meminfo.node3.AnonPages
>        26 ± 22%     +74.3%         45 ± 16%  sched_debug.cfs_rq[65]:/.tg_load_contrib
>        21 ± 43%     +71.3%         37 ± 26%  sched_debug.cfs_rq[100]:/.tg_load_contrib
>      3686 ± 21%     +40.2%       5167 ± 19%  sched_debug.cpu#16.ttwu_count
>       142 ±  9%     +34.4%        191 ± 24%  sched_debug.cpu#112.ttwu_local
>         5 ± 18%     +69.6%          9 ± 15%  sched_debug.cfs_rq[35]:/.load
>         2 ± 30%    +100.0%          5 ± 37%  sched_debug.cpu#106.cpu_load[1]
>         3 ± 23%    +100.0%          6 ± 48%  sched_debug.cpu#106.cpu_load[2]
>         5 ± 18%     +69.6%          9 ± 15%  sched_debug.cpu#35.load
>         9 ± 20%     +48.6%         13 ± 16%  sched_debug.cfs_rq[7]:/.runnable_load_avg
>      1727 ± 15%     +43.9%       2484 ± 30%  sched_debug.cpu#34.ttwu_local
>        10 ± 17%     -40.5%          6 ± 13%  sched_debug.cpu#41.cpu_load[0]
>        10 ± 14%     -29.3%          7 ±  5%  sched_debug.cpu#45.cpu_load[4]
>        10 ± 17%     -33.3%          7 ± 10%  sched_debug.cpu#41.cpu_load[1]
>      6121 ±  8%     +56.7%       9595 ± 30%  sched_debug.cpu#13.sched_goidle
>        13 ±  8%     -25.9%         10 ± 17%  sched_debug.cpu#39.cpu_load[2]
>        12 ± 16%     -24.0%          9 ± 15%  sched_debug.cpu#37.cpu_load[2]
>       492 ± 17%     -21.3%        387 ± 24%  sched_debug.cpu#46.ttwu_count
>      3761 ± 11%     -23.9%       2863 ± 15%  sched_debug.cpu#93.curr->pid
>       570 ± 19%     +43.2%        816 ± 17%  sched_debug.cpu#86.ttwu_count
>      5279 ±  8%     +63.5%       8631 ± 33%  sched_debug.cpu#13.ttwu_count
>       377 ± 22%     -28.6%        269 ± 14%  sched_debug.cpu#46.ttwu_local
>      5396 ± 10%     +29.9%       7007 ± 14%  sched_debug.cpu#16.sched_goidle
>      1959 ± 12%     +36.9%       2683 ± 15%  numa-vmstat.node2.nr_slab_reclaimable
>      7839 ± 12%     +37.0%      10736 ± 15%  numa-meminfo.node2.SReclaimable
>         5 ± 15%     +66.7%          8 ±  9%  sched_debug.cfs_rq[33]:/.load
>         5 ± 25%     +47.8%          8 ± 10%  sched_debug.cfs_rq[37]:/.load
>         2 ±  0%     +87.5%          3 ± 34%  sched_debug.cpu#89.cpu_load[4]
>         5 ± 15%     +66.7%          8 ±  9%  sched_debug.cpu#33.load
>         6 ± 23%     +41.7%          8 ± 10%  sched_debug.cpu#37.load
>         8 ± 10%     -26.5%          6 ±  6%  sched_debug.cpu#51.cpu_load[1]
>      7300 ± 37%     +63.6%      11943 ± 16%  softirqs.TASKLET
>      2984 ±  6%     +43.1%       4271 ± 23%  sched_debug.cpu#20.ttwu_count
>       328 ±  4%     +40.5%        462 ± 25%  sched_debug.cpu#26.ttwu_local
>        10 ±  7%     -27.5%          7 ±  5%  sched_debug.cpu#43.cpu_load[3]
>         9 ±  8%     -30.8%          6 ±  6%  sched_debug.cpu#41.cpu_load[3]
>         9 ±  8%     -27.0%          6 ±  6%  sched_debug.cpu#41.cpu_load[4]
>        10 ± 14%     -32.5%          6 ±  6%  sched_debug.cpu#41.cpu_load[2]
>     16292 ±  6%     +42.8%      23260 ± 25%  sched_debug.cpu#13.nr_switches
>        14 ± 28%     +55.9%         23 ±  8%  sched_debug.cpu#99.cpu_load[0]
>         5 ±  8%     +28.6%          6 ± 12%  sched_debug.cpu#17.load
>        13 ±  7%     -23.1%         10 ± 12%  sched_debug.cpu#39.cpu_load[3]
>         7 ± 10%     -35.7%          4 ± 11%  sched_debug.cfs_rq[45]:/.runnable_load_avg
>      5076 ± 13%     -21.8%       3970 ± 11%  numa-vmstat.node0.nr_slab_unreclaimable
>     20306 ± 13%     -21.8%      15886 ± 11%  numa-meminfo.node0.SUnreclaim
>        10 ± 10%     -28.6%          7 ±  6%  sched_debug.cpu#45.cpu_load[3]
>        11 ± 11%     -29.5%          7 ± 14%  sched_debug.cpu#45.cpu_load[1]
>        10 ± 12%     -26.8%          7 ±  6%  sched_debug.cpu#44.cpu_load[1]
>        10 ± 10%     -28.6%          7 ±  6%  sched_debug.cpu#44.cpu_load[0]
>         7 ± 17%     +48.3%         10 ±  7%  sched_debug.cfs_rq[11]:/.runnable_load_avg
>        11 ± 12%     -34.1%          7 ± 11%  sched_debug.cpu#47.cpu_load[0]
>        10 ± 10%     -27.9%          7 ±  5%  sched_debug.cpu#47.cpu_load[1]
>        10 ±  8%     -26.8%          7 ± 11%  sched_debug.cpu#47.cpu_load[2]
>        10 ±  8%     -28.6%          7 ± 14%  sched_debug.cpu#43.cpu_load[0]
>        10 ± 10%     -27.9%          7 ± 10%  sched_debug.cpu#43.cpu_load[1]
>        10 ± 10%     -28.6%          7 ±  6%  sched_debug.cpu#43.cpu_load[2]
>     12940 ±  3%     +49.8%      19387 ± 35%  numa-meminfo.node2.Active(anon)
>      3235 ±  2%     +49.8%       4844 ± 35%  numa-vmstat.node2.nr_active_anon
>        17 ± 17%     +36.6%         24 ±  9%  sched_debug.cpu#97.cpu_load[2]
>     14725 ±  8%     +21.8%      17928 ± 11%  sched_debug.cpu#16.nr_switches
>       667 ± 10%     +45.3%        969 ± 22%  sched_debug.cpu#17.ttwu_local
>      3257 ±  5%     +22.4%       3988 ± 11%  sched_debug.cpu#118.curr->pid
>      3144 ± 15%     -20.7%       2493 ±  8%  sched_debug.cpu#95.curr->pid
>      2192 ± 11%     +50.9%       3308 ± 37%  sched_debug.cpu#18.ttwu_count
>         6 ± 11%     +37.5%          8 ± 19%  sched_debug.cfs_rq[22]:/.load
>        12 ±  5%     +27.1%         15 ±  8%  sched_debug.cpu#5.cpu_load[1]
>        11 ± 12%     -23.4%          9 ± 13%  sched_debug.cpu#37.cpu_load[3]
>         6 ± 11%     +37.5%          8 ± 19%  sched_debug.cpu#22.load
>         8 ±  8%     -25.0%          6 ±  0%  sched_debug.cpu#51.cpu_load[2]
>         7 ±  6%     -20.0%          6 ± 11%  sched_debug.cpu#55.cpu_load[3]
>        11 ±  9%     -17.4%          9 ±  9%  sched_debug.cpu#39.cpu_load[4]
>        12 ±  5%     -22.9%          9 ± 11%  sched_debug.cpu#38.cpu_load[3]
>       420 ± 13%     +43.0%        601 ±  9%  sched_debug.cpu#30.ttwu_local
>      1682 ± 14%     +38.5%       2329 ± 17%  numa-meminfo.node7.AnonPages
>       423 ± 13%     +37.0%        579 ± 16%  numa-vmstat.node7.nr_anon_pages
>        15 ± 13%     +41.9%         22 ±  5%  sched_debug.cpu#99.cpu_load[1]
>         6 ± 20%     +44.0%          9 ± 13%  sched_debug.cfs_rq[19]:/.runnable_load_avg
>         9 ±  4%     -24.3%          7 ±  0%  sched_debug.cpu#43.cpu_load[4]
>      6341 ±  7%     -19.6%       5100 ± 16%  sched_debug.cpu#43.curr->pid
>      2577 ± 11%     -11.9%       2270 ± 10%  sched_debug.cpu#33.ttwu_count
>        13 ±  6%     -18.5%         11 ± 12%  sched_debug.cpu#40.cpu_load[2]
>      4828 ±  6%     +23.8%       5979 ±  6%  sched_debug.cpu#34.curr->pid
>      4351 ± 12%     +33.9%       5824 ± 12%  sched_debug.cpu#36.curr->pid
>        10 ±  8%     -23.8%          8 ±  8%  sched_debug.cpu#37.cpu_load[4]
>        10 ± 14%     -28.6%          7 ±  6%  sched_debug.cpu#45.cpu_load[2]
>        17 ± 22%     +40.6%         24 ±  7%  sched_debug.cpu#97.cpu_load[1]
>        11 ±  9%     +21.3%         14 ±  5%  sched_debug.cpu#7.cpu_load[2]
>        10 ±  8%     -26.2%          7 ± 10%  sched_debug.cpu#36.cpu_load[4]
>     12853 ±  2%     +20.0%      15429 ± 11%  numa-meminfo.node2.AnonPages
>      4744 ±  8%     +30.8%       6204 ± 11%  sched_debug.cpu#35.curr->pid
>      3214 ±  2%     +20.0%       3856 ± 11%  numa-vmstat.node2.nr_anon_pages
>      6181 ±  6%     +24.9%       7718 ±  9%  sched_debug.cpu#13.curr->pid
>      6675 ± 23%     +27.5%       8510 ± 10%  sched_debug.cfs_rq[91]:/.tg_load_avg
>    171261 ±  5%     -22.2%     133177 ± 15%  numa-numastat.node0.local_node
>      6589 ± 21%     +29.3%       8522 ± 11%  sched_debug.cfs_rq[89]:/.tg_load_avg
>      6508 ± 20%     +28.0%       8331 ±  8%  sched_debug.cfs_rq[88]:/.tg_load_avg
>      6598 ± 22%     +29.2%       8525 ± 11%  sched_debug.cfs_rq[90]:/.tg_load_avg
>       590 ± 13%     -21.4%        464 ±  7%  sched_debug.cpu#105.ttwu_local
>    175392 ±  5%     -21.7%     137308 ± 14%  numa-numastat.node0.numa_hit
>        11 ±  6%     -18.2%          9 ±  7%  sched_debug.cpu#38.cpu_load[4]
>      6643 ± 23%     +27.4%       8465 ± 10%  sched_debug.cfs_rq[94]:/.tg_load_avg
>      6764 ±  7%     +13.8%       7695 ±  7%  sched_debug.cpu#12.curr->pid
>        29 ± 28%     +34.5%         39 ±  5%  sched_debug.cfs_rq[98]:/.tg_load_contrib
>      1776 ±  7%     +29.4%       2298 ± 13%  sched_debug.cpu#11.ttwu_local
>        13 ±  0%     -19.2%         10 ±  8%  sched_debug.cpu#40.cpu_load[3]
>         7 ±  5%     -17.2%          6 ±  0%  sched_debug.cpu#51.cpu_load[3]
>      7371 ± 20%     -18.0%       6045 ±  3%  sched_debug.cpu#1.sched_goidle
>     26560 ±  2%     +14.0%      30287 ±  7%  numa-meminfo.node2.Slab
>     16161 ±  6%      -9.4%      14646 ±  1%  sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
>       351 ±  6%      -9.3%        318 ±  1%  sched_debug.cfs_rq[27]:/.tg_runnable_contrib
>      7753 ± 27%     -22.9%       5976 ±  5%  sched_debug.cpu#2.sched_goidle
>      3828 ±  9%     +17.3%       4490 ±  6%  sched_debug.cpu#23.sched_goidle
>     23925 ±  2%     +23.0%      29419 ± 23%  numa-meminfo.node2.Active
>        47 ±  6%     -15.8%         40 ± 19%  sched_debug.cpu#42.cpu_load[1]
>       282 ±  5%      -9.7%        254 ±  7%  sched_debug.cfs_rq[109]:/.tg_runnable_contrib
>       349 ±  5%      -9.3%        317 ±  1%  sched_debug.cfs_rq[26]:/.tg_runnable_contrib
>      6941 ±  3%      +8.9%       7558 ±  7%  sched_debug.cpu#61.nr_switches
>     16051 ±  5%      -8.9%      14618 ±  1%  sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
>    238944 ±  3%      +9.2%     260958 ±  5%  numa-vmstat.node2.numa_local
>     12966 ±  5%      -9.5%      11732 ±  6%  sched_debug.cfs_rq[109]:/.avg->runnable_avg_sum
>      1004 ±  3%      +8.2%       1086 ±  4%  sched_debug.cpu#118.sched_goidle
>     20746 ±  4%      -8.4%      19000 ±  1%  sched_debug.cfs_rq[45]:/.avg->runnable_avg_sum
>       451 ±  4%      -8.3%        413 ±  1%  sched_debug.cfs_rq[45]:/.tg_runnable_contrib
>      3538 ±  4%     +17.2%       4147 ±  8%  sched_debug.cpu#26.ttwu_count
>        16 ±  9%     +13.8%         18 ±  2%  sched_debug.cpu#99.cpu_load[3]
>      1531 ±  0%     +11.3%       1704 ±  1%  numa-meminfo.node7.KernelStack
>      3569 ±  3%     +17.2%       4182 ± 10%  sched_debug.cpu#24.sched_goidle
>      1820 ±  3%     -12.5%       1594 ±  8%  slabinfo.taskstats.num_objs
>      1819 ±  3%     -12.4%       1594 ±  8%  slabinfo.taskstats.active_objs
>      4006 ±  5%     +19.1%       4769 ±  8%  sched_debug.cpu#17.sched_goidle
>     21412 ± 19%     -17.0%      17779 ±  3%  sched_debug.cpu#2.nr_switches
>        16 ±  9%     +24.2%         20 ±  4%  sched_debug.cpu#99.cpu_load[2]
>     10493 ±  7%     +13.3%      11890 ±  4%  sched_debug.cpu#23.nr_switches
>      1207 ±  2%     -46.9%        640 ±  4%  time.voluntary_context_switches
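>
> To read the table: each row gives the parent commit's mean ± relative stddev (left), the relative change, and the tested commit's mean ± relative stddev (right). A quick check of the two headline numbers (a sketch of the arithmetic, not the lkp-tests code):
>
> ```c
> #include <assert.h>
> #include <math.h>
> #include <stdio.h>
>
> /* Relative change, in percent, from the parent commit's mean value
>  * (left column) to the tested commit's mean value (right column). */
> static double pct_change(double parent, double tested)
> {
>     return (tested - parent) / parent * 100.0;
> }
>
> int main(void)
> {
>     /* will-it-scale.per_process_ops: 1517261 -> 1539994, reported +1.5% */
>     assert(fabs(pct_change(1517261.0, 1539994.0) - 1.5) < 0.05);
>     /* time.voluntary_context_switches: 1207 -> 640, reported -46.9% */
>     assert(fabs(pct_change(1207.0, 640.0) - (-46.9)) < 0.1);
>     puts("ok");
>     return 0;
> }
> ```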
>
>                           time.voluntary_context_switches
>
>   [ASCII bisect plot omitted; mangled in transit. Recoverable information:
>    y-axis 500-1300; bisect-good (*) samples hold steady around 1200-1300,
>    bisect-bad (O) samples drop to around 550-650.]
>
>         [*] bisect-good sample
>         [O] bisect-bad  sample
>
> To reproduce:
>
>         apt-get install ruby ruby-oj
>         git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
>         cd lkp-tests
>         bin/setup-local job.yaml # the job file attached in this email
>         bin/run-local   job.yaml
>
> Disclaimer:
> Results have been estimated based on internal Intel analysis and are provided
> for informational purposes only. Any difference in system hardware or software
> design or configuration may affect actual performance.
>
Regards,
Kirill
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/