Re: [sched] 143e1e28cb4: +17.9% aim7.jobs-per-min, -9.7% hackbench.throughput

From: Fengguang Wu
Date: Sun Aug 10 2014 - 06:54:29 EST


On Sun, Aug 10, 2014 at 09:59:15AM +0200, Peter Zijlstra wrote:
> On Sun, Aug 10, 2014 at 12:41:27PM +0800, Fengguang Wu wrote:
> > Hi Vincent,
> >
> > FYI, we noticed some performance ups/downs on
> >
> > commit 143e1e28cb40bed836b0a06567208bd7347c9672 ("sched: Rework sched_domain topology definition")
> >
> >    128529 ± 1%     +17.9%     151594 ± 0%  brickland1/aim7/6000-page_test
> >     76064 ± 3%     -32.2%      51572 ± 6%  brickland1/aim7/6000-page_test
> >  59366697 ± 3%     -46.1%   32017187 ± 7%  brickland1/aim7/6000-page_test
> >      2561 ± 7%     -42.9%       1463 ± 9%  brickland1/aim7/6000-page_test
> >      9926 ± 2%     -43.8%       5577 ± 4%  brickland1/aim7/6000-page_test
> >     19542 ± 9%     -38.3%      12057 ± 4%  brickland1/aim7/6000-page_test
> >    993654 ± 2%     -19.9%     795962 ± 3%  brickland1/aim7/6000-page_test
>
> etc..
>
> how does one read that? afaict its a random number generator..

The "brickland1/aim7/6000-page_test" is the test case part.

The "TOTAL XXX" is the metric part. One test run may generate lots of
metrics, each reflecting a different aspect of the system dynamics.
In each row, the first pair of numbers is the parent commit's mean
± stddev%, followed by the percent change and then the mean ± stddev%
for this commit.

The view below, which groups the metrics by test case, may be easier to read:

test case: brickland1/aim7/6000-page_test

   128529 ± 1%     +17.9%     151594 ± 0%  TOTAL aim7.jobs-per-min
   582269 ±14%     -55.6%     258617 ±16%  TOTAL softirqs.SCHED
 59366697 ± 3%     -46.1%   32017187 ± 7%  TOTAL cpuidle.C1-IVT.time
    54543 ±11%     -37.2%      34252 ±16%  TOTAL cpuidle.C1-IVT.usage
     2561 ± 7%     -42.9%       1463 ± 9%  TOTAL numa-numastat.node2.other_node
     9926 ± 2%     -43.8%       5577 ± 4%  TOTAL proc-vmstat.numa_other
     2627 ±12%     -49.1%       1337 ±12%  TOTAL numa-numastat.node1.other_node
    19542 ± 9%     -38.3%      12057 ± 4%  TOTAL cpuidle.C1E-IVT.usage
     2455 ±10%     -41.0%       1448 ± 9%  TOTAL numa-numastat.node0.other_node
   471304 ±11%     -31.4%     323251 ± 8%  TOTAL numa-vmstat.node1.nr_anon_pages
     2281 ±12%     -41.8%       1327 ±16%  TOTAL numa-numastat.node3.other_node
  1903446 ±11%     -30.7%    1318156 ± 7%  TOTAL numa-meminfo.node1.AnonPages
   518274 ±11%     -30.4%     360742 ± 8%  TOTAL numa-vmstat.node1.nr_active_anon
  2097138 ±10%     -30.0%    1469003 ± 8%  TOTAL numa-meminfo.node1.Active(anon)
 49527464 ± 6%     -32.4%   33488833 ± 4%  TOTAL cpuidle.C1E-IVT.time
  2118206 ±10%     -29.7%    1488874 ± 7%  TOTAL numa-meminfo.node1.Active
    76064 ± 3%     -32.2%      51572 ± 6%  TOTAL cpuidle.C6-IVT.usage
   188938 ±33%     -41.3%     110966 ±16%  TOTAL numa-meminfo.node2.PageTables
    47262 ±35%     -42.3%      27273 ±16%  TOTAL numa-vmstat.node2.nr_page_table_pages
  1944687 ±10%     -25.8%    1443923 ±16%  TOTAL numa-meminfo.node3.Active(anon)
  1754763 ±11%     -26.6%    1288713 ±16%  TOTAL numa-meminfo.node3.AnonPages
  1964722 ±10%     -25.5%    1464696 ±16%  TOTAL numa-meminfo.node3.Active
   432109 ± 9%     -26.2%     318886 ±14%  TOTAL numa-vmstat.node3.nr_anon_pages
   479527 ± 9%     -25.3%     358029 ±14%  TOTAL numa-vmstat.node3.nr_active_anon
   463719 ± 8%     -24.7%     349388 ± 7%  TOTAL numa-vmstat.node0.nr_anon_pages
  3157742 ±16%     -26.5%    2320253 ±10%  TOTAL numa-meminfo.node1.MemUsed
  7303589 ± 2%     -24.8%    5495829 ± 3%  TOTAL meminfo.AnonPages
  8064024 ± 2%     -24.0%    6132677 ± 3%  TOTAL meminfo.Active(anon)
   511455 ± 8%     -23.9%     389447 ± 7%  TOTAL numa-vmstat.node0.nr_active_anon
  1818612 ± 2%     -24.9%    1365670 ± 3%  TOTAL proc-vmstat.nr_anon_pages
  2007155 ± 2%     -24.3%    1518688 ± 3%  TOTAL proc-vmstat.nr_active_anon
  8145316 ± 2%     -23.7%    6213832 ± 3%  TOTAL meminfo.Active
  1850230 ± 8%     -24.1%    1405061 ± 8%  TOTAL numa-meminfo.node0.AnonPages
6.567e+11 ± 3%     -21.4%   5.16e+11 ± 4%  TOTAL meminfo.Committed_AS
  2044097 ± 7%     -23.5%    1562809 ± 8%  TOTAL numa-meminfo.node0.Active(anon)
  2064106 ± 7%     -23.3%    1582792 ± 8%  TOTAL numa-meminfo.node0.Active
   235358 ± 5%     -19.8%     188793 ± 3%  TOTAL proc-vmstat.pgmigrate_success
   235358 ± 5%     -19.8%     188793 ± 3%  TOTAL proc-vmstat.numa_pages_migrated
   433235 ± 4%     -18.1%     354845 ± 5%  TOTAL numa-vmstat.node2.nr_anon_pages
   198747 ±23%     -28.0%     143034 ± 3%  TOTAL proc-vmstat.nr_page_table_pages
     3187 ± 5%     -18.5%       2599 ± 6%  TOTAL numa-vmstat.node0.numa_other
   796281 ±23%     -27.7%     575352 ± 3%  TOTAL meminfo.PageTables
  1395062 ± 6%     -19.0%    1130108 ± 3%  TOTAL proc-vmstat.numa_hint_faults
   477037 ± 4%     -17.2%     394983 ± 5%  TOTAL numa-vmstat.node2.nr_active_anon
     2829 ±10%     +18.7%       3357 ± 3%  TOTAL numa-vmstat.node2.nr_alloc_batch
   993654 ± 2%     -19.9%     795962 ± 3%  TOTAL softirqs.RCU
     2706 ± 4%     +26.1%       3411 ± 5%  TOTAL numa-vmstat.node1.nr_alloc_batch
  2725835 ± 4%     -17.5%    2247537 ± 4%  TOTAL numa-meminfo.node2.MemUsed
   393637 ± 6%     -15.3%     333296 ± 2%  TOTAL proc-vmstat.numa_hint_faults_local
     2.82 ± 3%     +21.9%       3.43 ± 4%  TOTAL turbostat.%pc2
     4.40 ± 2%     +22.0%       5.37 ± 4%  TOTAL turbostat.%c6
  1742111 ± 4%     -16.9%    1447181 ± 5%  TOTAL numa-meminfo.node2.AnonPages
 15865125 ± 1%     -15.0%   13485882 ± 1%  TOTAL softirqs.TIMER
  1923000 ± 4%     -16.4%    1608509 ± 5%  TOTAL numa-meminfo.node2.Active(anon)
  1943185 ± 4%     -16.2%    1629057 ± 5%  TOTAL numa-meminfo.node2.Active
     3077 ± 1%     +14.5%       3523 ± 0%  TOTAL proc-vmstat.pgactivate
      329 ± 1%     -13.3%        285 ± 0%  TOTAL uptime.boot
    13158 ±13%     -14.4%      11261 ± 4%  TOTAL numa-meminfo.node3.SReclaimable
     3289 ±13%     -14.4%       2815 ± 4%  TOTAL numa-vmstat.node3.nr_slab_reclaimable
  3150464 ± 2%     -24.2%    2387551 ± 3%  TOTAL time.voluntary_context_switches
      281 ± 1%     -15.1%        238 ± 0%  TOTAL time.elapsed_time
    29294 ± 1%     -14.3%      25093 ± 0%  TOTAL time.system_time
  4529818 ± 1%      -8.8%    4129398 ± 1%  TOTAL time.involuntary_context_switches
    15.75 ± 1%      -3.4%      15.21 ± 0%  TOTAL turbostat.RAM_W
    10655 ± 0%      +1.4%      10802 ± 0%  TOTAL time.percent_of_cpu_this_job_got
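
In case it helps, here is a minimal sketch (not part of the LKP
tooling, just an illustration) of how such a comparison row can be
split into its fields, assuming the column layout described above:

import re

# Assumed layout per row: base mean ± stddev%, percent change,
# new mean ± stddev%, then "TOTAL" and the metric name.
LINE_RE = re.compile(
    r"^\s*([\d.e+]+)\s*±\s*(\d+)%"   # base mean and its stddev%
    r"\s*([+-][\d.]+)%"              # percent change
    r"\s*([\d.e+]+)\s*±\s*(\d+)%"    # new mean and its stddev%
    r"\s*TOTAL\s+(\S+)"              # metric name
)

def parse(line):
    m = LINE_RE.match(line)
    if not m:
        return None
    base, base_sd, change, new, new_sd, metric = m.groups()
    return {
        "metric": metric,
        "base_mean": float(base),
        "base_stddev_pct": int(base_sd),
        "change_pct": float(change),
        "new_mean": float(new),
        "new_stddev_pct": int(new_sd),
    }

print(parse("   128529 ± 1%     +17.9%     151594 ± 0%  TOTAL aim7.jobs-per-min"))

The stddev% next to each mean is what separates signal from noise:
the +17.9% aim7.jobs-per-min change sits well above its 0-1%
run-to-run variation, so it is not a random number.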

Thanks,
Fengguang