Re: [LKP] [sched] a15b12ac36a: -46.9% time.voluntary_context_switches +1.5% will-it-scale.per_process_ops

From: Huang Ying
Date: Sat Jan 03 2015 - 19:39:56 EST


Hi, Kirill,

Sorry for the late reply.

On Tue, 2014-12-23 at 11:57 +0300, Kirill Tkhai wrote:
> Hi, Huang,
>
> what do these digits mean? What test does?
>
> 23.12.2014, 08:16, "Huang Ying" <ying.huang@xxxxxxxxx>:
> > FYI, we noticed the below changes on
> >
> > commit a15b12ac36ad4e7b856a4ae54937ae26a51aebad ("sched: Do not stop cpu in set_cpus_allowed_ptr() if task is not running")
> >
> > testbox/testcase/testparams: lkp-g5/will-it-scale/performance-lock1
> >
> > 1ba93d42727c4400 a15b12ac36ad4e7b856a4ae549
> > ---------------- --------------------------

Above are the good (parent) commit and the bad commit being compared.

> > %stddev %change %stddev
> > \ | \
> > 1517261 ± 0% +1.5% 1539994 ± 0% will-it-scale.per_process_ops

We have a basic description of the data above: %stddev is the standard
deviation across repeated runs, expressed as a percentage of the mean, and
%change is the change of the mean relative to the parent commit.
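For what it's worth, the columns can be understood roughly like this (a sketch in Python; the per-run sample values below are hypothetical, since the report only shows the per-commit summaries, and the actual lkp-tests implementation may differ in detail):

```python
import statistics

def summarize(samples):
    """Return (mean, %stddev): relative standard deviation of repeated runs."""
    mean = statistics.mean(samples)
    rel = 100.0 * statistics.pstdev(samples) / mean if mean else 0.0
    return mean, rel

def pct_change(base_mean, new_mean):
    """%change of the new commit's mean relative to the base commit's mean."""
    return 100.0 * (new_mean - base_mean) / base_mean

# Hypothetical per-run samples for will-it-scale.per_process_ops.
base_runs = [1516800, 1517261, 1517700]  # parent commit 1ba93d42727c4400
new_runs = [1539500, 1539994, 1540500]   # commit a15b12ac36ad

base_mean, base_sd = summarize(base_runs)
new_mean, new_sd = summarize(new_runs)
print("%.0f +-%.0f%%  ->  %.0f +-%.0f%%  (%+.1f%%)"
      % (base_mean, base_sd, new_mean, new_sd,
         pct_change(base_mean, new_mean)))
```

Applied to the reported means, pct_change(1517261, 1539994) comes out at about +1.5%, matching the will-it-scale.per_process_ops line above.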

What more would you like to know?

Best Regards,
Huang, Ying

> > 247 ± 30% +131.8% 573 ± 49% sched_debug.cpu#61.ttwu_count
> > 225 ± 22% +142.8% 546 ± 34% sched_debug.cpu#81.ttwu_local
> > 15115 ± 44% +37.3% 20746 ± 40% numa-meminfo.node7.Active
> > 1028 ± 38% +115.3% 2214 ± 36% sched_debug.cpu#16.ttwu_local
> > 2 ± 19% +133.3% 5 ± 43% sched_debug.cpu#89.cpu_load[3]
> > 21 ± 45% +88.2% 40 ± 23% sched_debug.cfs_rq[99]:/.tg_load_contrib
> > 414 ± 33% +98.6% 823 ± 28% sched_debug.cpu#81.ttwu_count
> > 4 ± 10% +88.2% 8 ± 12% sched_debug.cfs_rq[33]:/.runnable_load_avg
> > 22 ± 26% +80.9% 40 ± 24% sched_debug.cfs_rq[103]:/.tg_load_contrib
> > 7 ± 17% -41.4% 4 ± 25% sched_debug.cfs_rq[41]:/.load
> > 7 ± 17% -37.9% 4 ± 19% sched_debug.cpu#41.load
> > 3 ± 22% +106.7% 7 ± 10% sched_debug.cfs_rq[36]:/.runnable_load_avg
> > 174 ± 13% +48.7% 259 ± 31% sched_debug.cpu#112.ttwu_count
> > 4 ± 19% +88.9% 8 ± 5% sched_debug.cfs_rq[35]:/.runnable_load_avg
> > 260 ± 10% +55.6% 405 ± 26% numa-vmstat.node3.nr_anon_pages
> > 1042 ± 10% +56.0% 1626 ± 26% numa-meminfo.node3.AnonPages
> > 26 ± 22% +74.3% 45 ± 16% sched_debug.cfs_rq[65]:/.tg_load_contrib
> > 21 ± 43% +71.3% 37 ± 26% sched_debug.cfs_rq[100]:/.tg_load_contrib
> > 3686 ± 21% +40.2% 5167 ± 19% sched_debug.cpu#16.ttwu_count
> > 142 ± 9% +34.4% 191 ± 24% sched_debug.cpu#112.ttwu_local
> > 5 ± 18% +69.6% 9 ± 15% sched_debug.cfs_rq[35]:/.load
> > 2 ± 30% +100.0% 5 ± 37% sched_debug.cpu#106.cpu_load[1]
> > 3 ± 23% +100.0% 6 ± 48% sched_debug.cpu#106.cpu_load[2]
> > 5 ± 18% +69.6% 9 ± 15% sched_debug.cpu#35.load
> > 9 ± 20% +48.6% 13 ± 16% sched_debug.cfs_rq[7]:/.runnable_load_avg
> > 1727 ± 15% +43.9% 2484 ± 30% sched_debug.cpu#34.ttwu_local
> > 10 ± 17% -40.5% 6 ± 13% sched_debug.cpu#41.cpu_load[0]
> > 10 ± 14% -29.3% 7 ± 5% sched_debug.cpu#45.cpu_load[4]
> > 10 ± 17% -33.3% 7 ± 10% sched_debug.cpu#41.cpu_load[1]
> > 6121 ± 8% +56.7% 9595 ± 30% sched_debug.cpu#13.sched_goidle
> > 13 ± 8% -25.9% 10 ± 17% sched_debug.cpu#39.cpu_load[2]
> > 12 ± 16% -24.0% 9 ± 15% sched_debug.cpu#37.cpu_load[2]
> > 492 ± 17% -21.3% 387 ± 24% sched_debug.cpu#46.ttwu_count
> > 3761 ± 11% -23.9% 2863 ± 15% sched_debug.cpu#93.curr->pid
> > 570 ± 19% +43.2% 816 ± 17% sched_debug.cpu#86.ttwu_count
> > 5279 ± 8% +63.5% 8631 ± 33% sched_debug.cpu#13.ttwu_count
> > 377 ± 22% -28.6% 269 ± 14% sched_debug.cpu#46.ttwu_local
> > 5396 ± 10% +29.9% 7007 ± 14% sched_debug.cpu#16.sched_goidle
> > 1959 ± 12% +36.9% 2683 ± 15% numa-vmstat.node2.nr_slab_reclaimable
> > 7839 ± 12% +37.0% 10736 ± 15% numa-meminfo.node2.SReclaimable
> > 5 ± 15% +66.7% 8 ± 9% sched_debug.cfs_rq[33]:/.load
> > 5 ± 25% +47.8% 8 ± 10% sched_debug.cfs_rq[37]:/.load
> > 2 ± 0% +87.5% 3 ± 34% sched_debug.cpu#89.cpu_load[4]
> > 5 ± 15% +66.7% 8 ± 9% sched_debug.cpu#33.load
> > 6 ± 23% +41.7% 8 ± 10% sched_debug.cpu#37.load
> > 8 ± 10% -26.5% 6 ± 6% sched_debug.cpu#51.cpu_load[1]
> > 7300 ± 37% +63.6% 11943 ± 16% softirqs.TASKLET
> > 2984 ± 6% +43.1% 4271 ± 23% sched_debug.cpu#20.ttwu_count
> > 328 ± 4% +40.5% 462 ± 25% sched_debug.cpu#26.ttwu_local
> > 10 ± 7% -27.5% 7 ± 5% sched_debug.cpu#43.cpu_load[3]
> > 9 ± 8% -30.8% 6 ± 6% sched_debug.cpu#41.cpu_load[3]
> > 9 ± 8% -27.0% 6 ± 6% sched_debug.cpu#41.cpu_load[4]
> > 10 ± 14% -32.5% 6 ± 6% sched_debug.cpu#41.cpu_load[2]
> > 16292 ± 6% +42.8% 23260 ± 25% sched_debug.cpu#13.nr_switches
> > 14 ± 28% +55.9% 23 ± 8% sched_debug.cpu#99.cpu_load[0]
> > 5 ± 8% +28.6% 6 ± 12% sched_debug.cpu#17.load
> > 13 ± 7% -23.1% 10 ± 12% sched_debug.cpu#39.cpu_load[3]
> > 7 ± 10% -35.7% 4 ± 11% sched_debug.cfs_rq[45]:/.runnable_load_avg
> > 5076 ± 13% -21.8% 3970 ± 11% numa-vmstat.node0.nr_slab_unreclaimable
> > 20306 ± 13% -21.8% 15886 ± 11% numa-meminfo.node0.SUnreclaim
> > 10 ± 10% -28.6% 7 ± 6% sched_debug.cpu#45.cpu_load[3]
> > 11 ± 11% -29.5% 7 ± 14% sched_debug.cpu#45.cpu_load[1]
> > 10 ± 12% -26.8% 7 ± 6% sched_debug.cpu#44.cpu_load[1]
> > 10 ± 10% -28.6% 7 ± 6% sched_debug.cpu#44.cpu_load[0]
> > 7 ± 17% +48.3% 10 ± 7% sched_debug.cfs_rq[11]:/.runnable_load_avg
> > 11 ± 12% -34.1% 7 ± 11% sched_debug.cpu#47.cpu_load[0]
> > 10 ± 10% -27.9% 7 ± 5% sched_debug.cpu#47.cpu_load[1]
> > 10 ± 8% -26.8% 7 ± 11% sched_debug.cpu#47.cpu_load[2]
> > 10 ± 8% -28.6% 7 ± 14% sched_debug.cpu#43.cpu_load[0]
> > 10 ± 10% -27.9% 7 ± 10% sched_debug.cpu#43.cpu_load[1]
> > 10 ± 10% -28.6% 7 ± 6% sched_debug.cpu#43.cpu_load[2]
> > 12940 ± 3% +49.8% 19387 ± 35% numa-meminfo.node2.Active(anon)
> > 3235 ± 2% +49.8% 4844 ± 35% numa-vmstat.node2.nr_active_anon
> > 17 ± 17% +36.6% 24 ± 9% sched_debug.cpu#97.cpu_load[2]
> > 14725 ± 8% +21.8% 17928 ± 11% sched_debug.cpu#16.nr_switches
> > 667 ± 10% +45.3% 969 ± 22% sched_debug.cpu#17.ttwu_local
> > 3257 ± 5% +22.4% 3988 ± 11% sched_debug.cpu#118.curr->pid
> > 3144 ± 15% -20.7% 2493 ± 8% sched_debug.cpu#95.curr->pid
> > 2192 ± 11% +50.9% 3308 ± 37% sched_debug.cpu#18.ttwu_count
> > 6 ± 11% +37.5% 8 ± 19% sched_debug.cfs_rq[22]:/.load
> > 12 ± 5% +27.1% 15 ± 8% sched_debug.cpu#5.cpu_load[1]
> > 11 ± 12% -23.4% 9 ± 13% sched_debug.cpu#37.cpu_load[3]
> > 6 ± 11% +37.5% 8 ± 19% sched_debug.cpu#22.load
> > 8 ± 8% -25.0% 6 ± 0% sched_debug.cpu#51.cpu_load[2]
> > 7 ± 6% -20.0% 6 ± 11% sched_debug.cpu#55.cpu_load[3]
> > 11 ± 9% -17.4% 9 ± 9% sched_debug.cpu#39.cpu_load[4]
> > 12 ± 5% -22.9% 9 ± 11% sched_debug.cpu#38.cpu_load[3]
> > 420 ± 13% +43.0% 601 ± 9% sched_debug.cpu#30.ttwu_local
> > 1682 ± 14% +38.5% 2329 ± 17% numa-meminfo.node7.AnonPages
> > 423 ± 13% +37.0% 579 ± 16% numa-vmstat.node7.nr_anon_pages
> > 15 ± 13% +41.9% 22 ± 5% sched_debug.cpu#99.cpu_load[1]
> > 6 ± 20% +44.0% 9 ± 13% sched_debug.cfs_rq[19]:/.runnable_load_avg
> > 9 ± 4% -24.3% 7 ± 0% sched_debug.cpu#43.cpu_load[4]
> > 6341 ± 7% -19.6% 5100 ± 16% sched_debug.cpu#43.curr->pid
> > 2577 ± 11% -11.9% 2270 ± 10% sched_debug.cpu#33.ttwu_count
> > 13 ± 6% -18.5% 11 ± 12% sched_debug.cpu#40.cpu_load[2]
> > 4828 ± 6% +23.8% 5979 ± 6% sched_debug.cpu#34.curr->pid
> > 4351 ± 12% +33.9% 5824 ± 12% sched_debug.cpu#36.curr->pid
> > 10 ± 8% -23.8% 8 ± 8% sched_debug.cpu#37.cpu_load[4]
> > 10 ± 14% -28.6% 7 ± 6% sched_debug.cpu#45.cpu_load[2]
> > 17 ± 22% +40.6% 24 ± 7% sched_debug.cpu#97.cpu_load[1]
> > 11 ± 9% +21.3% 14 ± 5% sched_debug.cpu#7.cpu_load[2]
> > 10 ± 8% -26.2% 7 ± 10% sched_debug.cpu#36.cpu_load[4]
> > 12853 ± 2% +20.0% 15429 ± 11% numa-meminfo.node2.AnonPages
> > 4744 ± 8% +30.8% 6204 ± 11% sched_debug.cpu#35.curr->pid
> > 3214 ± 2% +20.0% 3856 ± 11% numa-vmstat.node2.nr_anon_pages
> > 6181 ± 6% +24.9% 7718 ± 9% sched_debug.cpu#13.curr->pid
> > 6675 ± 23% +27.5% 8510 ± 10% sched_debug.cfs_rq[91]:/.tg_load_avg
> > 171261 ± 5% -22.2% 133177 ± 15% numa-numastat.node0.local_node
> > 6589 ± 21% +29.3% 8522 ± 11% sched_debug.cfs_rq[89]:/.tg_load_avg
> > 6508 ± 20% +28.0% 8331 ± 8% sched_debug.cfs_rq[88]:/.tg_load_avg
> > 6598 ± 22% +29.2% 8525 ± 11% sched_debug.cfs_rq[90]:/.tg_load_avg
> > 590 ± 13% -21.4% 464 ± 7% sched_debug.cpu#105.ttwu_local
> > 175392 ± 5% -21.7% 137308 ± 14% numa-numastat.node0.numa_hit
> > 11 ± 6% -18.2% 9 ± 7% sched_debug.cpu#38.cpu_load[4]
> > 6643 ± 23% +27.4% 8465 ± 10% sched_debug.cfs_rq[94]:/.tg_load_avg
> > 6764 ± 7% +13.8% 7695 ± 7% sched_debug.cpu#12.curr->pid
> > 29 ± 28% +34.5% 39 ± 5% sched_debug.cfs_rq[98]:/.tg_load_contrib
> > 1776 ± 7% +29.4% 2298 ± 13% sched_debug.cpu#11.ttwu_local
> > 13 ± 0% -19.2% 10 ± 8% sched_debug.cpu#40.cpu_load[3]
> > 7 ± 5% -17.2% 6 ± 0% sched_debug.cpu#51.cpu_load[3]
> > 7371 ± 20% -18.0% 6045 ± 3% sched_debug.cpu#1.sched_goidle
> > 26560 ± 2% +14.0% 30287 ± 7% numa-meminfo.node2.Slab
> > 16161 ± 6% -9.4% 14646 ± 1% sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
> > 351 ± 6% -9.3% 318 ± 1% sched_debug.cfs_rq[27]:/.tg_runnable_contrib
> > 7753 ± 27% -22.9% 5976 ± 5% sched_debug.cpu#2.sched_goidle
> > 3828 ± 9% +17.3% 4490 ± 6% sched_debug.cpu#23.sched_goidle
> > 23925 ± 2% +23.0% 29419 ± 23% numa-meminfo.node2.Active
> > 47 ± 6% -15.8% 40 ± 19% sched_debug.cpu#42.cpu_load[1]
> > 282 ± 5% -9.7% 254 ± 7% sched_debug.cfs_rq[109]:/.tg_runnable_contrib
> > 349 ± 5% -9.3% 317 ± 1% sched_debug.cfs_rq[26]:/.tg_runnable_contrib
> > 6941 ± 3% +8.9% 7558 ± 7% sched_debug.cpu#61.nr_switches
> > 16051 ± 5% -8.9% 14618 ± 1% sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
> > 238944 ± 3% +9.2% 260958 ± 5% numa-vmstat.node2.numa_local
> > 12966 ± 5% -9.5% 11732 ± 6% sched_debug.cfs_rq[109]:/.avg->runnable_avg_sum
> > 1004 ± 3% +8.2% 1086 ± 4% sched_debug.cpu#118.sched_goidle
> > 20746 ± 4% -8.4% 19000 ± 1% sched_debug.cfs_rq[45]:/.avg->runnable_avg_sum
> > 451 ± 4% -8.3% 413 ± 1% sched_debug.cfs_rq[45]:/.tg_runnable_contrib
> > 3538 ± 4% +17.2% 4147 ± 8% sched_debug.cpu#26.ttwu_count
> > 16 ± 9% +13.8% 18 ± 2% sched_debug.cpu#99.cpu_load[3]
> > 1531 ± 0% +11.3% 1704 ± 1% numa-meminfo.node7.KernelStack
> > 3569 ± 3% +17.2% 4182 ± 10% sched_debug.cpu#24.sched_goidle
> > 1820 ± 3% -12.5% 1594 ± 8% slabinfo.taskstats.num_objs
> > 1819 ± 3% -12.4% 1594 ± 8% slabinfo.taskstats.active_objs
> > 4006 ± 5% +19.1% 4769 ± 8% sched_debug.cpu#17.sched_goidle
> > 21412 ± 19% -17.0% 17779 ± 3% sched_debug.cpu#2.nr_switches
> > 16 ± 9% +24.2% 20 ± 4% sched_debug.cpu#99.cpu_load[2]
> > 10493 ± 7% +13.3% 11890 ± 4% sched_debug.cpu#23.nr_switches
> > 1207 ± 2% -46.9% 640 ± 4% time.voluntary_context_switches
> >
> > time.voluntary_context_switches
> >
> > 1300 ++-----------*--*--------------------*-------------------------------+
> > *..*.*..*.. + *.*..*..*.*..*..* .*..*..*. .*..*.*..*.. |
> > 1200 ++ * * *. *.*..*
> > 1100 ++ |
> > | |
> > 1000 ++ |
> > | |
> > 900 ++ |
> > | |
> > 800 ++ |
> > 700 ++ |
> > O O O O O O O O O O O O O |
> > 600 ++ O O O O O O O O O |
> > | O |
> > 500 ++-------------------------------------------------------------------+
> >
> > [*] bisect-good sample
> > [O] bisect-bad sample
> >
> > To reproduce:
> >
> > apt-get install ruby ruby-oj
> > git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
> > cd lkp-tests
> > bin/setup-local job.yaml # the job file attached in this email
> > bin/run-local job.yaml
> >
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are provided
> > for informational purposes only. Any difference in system hardware or software
> > design or configuration may affect actual performance.
> >
>
> Regards,
> Kirill

