[LKP] [mm] f1820361f83: -53.5% proc-vmstat.pgfault

From: Fengguang Wu
Date: Tue Sep 30 2014 - 04:06:14 EST


Hi Kirill,

FYI, we noticed a 53.5% reduction in page faults and an increase in mapped pages on

commit f1820361f83d556a7f0a9f629100f3825e594328 ("mm: implement ->map_pages for page cache")

testbox/testcase/testparams: lituya/will-it-scale/brk1

8c6e50b0290c4c7 f1820361f83d556a7f0a9f629
--------------- -------------------------
1111649 ± 0% -53.5% 517160 ± 0% proc-vmstat.pgfault
42 ±15% -39.6% 25 ±22% sched_debug.cpu#0.load
21 ±23% +45.3% 30 ±20% sched_debug.cfs_rq[3]:/.runnable_load_avg
1907 ± 0% +56.4% 2983 ± 0% proc-vmstat.nr_mapped
7631 ± 0% +56.4% 11938 ± 0% meminfo.Mapped
5002 ± 0% +42.3% 7119 ± 0% time.maximum_resident_set_size
38824 ±17% +40.3% 54478 ±19% sched_debug.cpu#4.nr_load_updates
572834 ±18% +32.3% 757713 ±10% sched_debug.cpu#4.sched_count
563679 ±18% +32.3% 745714 ±11% sched_debug.cpu#4.nr_switches
280478 ±18% +32.3% 371078 ±11% sched_debug.cpu#4.sched_goidle
67648 ± 0% +25.6% 84935 ± 0% meminfo.Active(file)
16911 ± 0% +25.6% 21233 ± 0% proc-vmstat.nr_active_file
21 ± 9% +27.5% 27 ±17% sched_debug.cfs_rq[2]:/.runnable_load_avg
331375 ±18% +31.0% 434139 ± 9% sched_debug.cpu#4.ttwu_count
104185 ± 0% +16.6% 121516 ± 0% meminfo.Active
2716 ± 0% +15.5% 3138 ± 0% proc-vmstat.pgactivate
22227 ± 0% -40.5% 13227 ± 0% time.minor_page_faults
14852144 ± 3% +7.5% 15967380 ± 4% time.voluntary_context_switches
103047 ± 3% +7.1% 110336 ± 4% vmstat.system.cs


time.minor_page_faults

3000 ++------------------------------------------------------------------+
2800 *+*..*.*..*.*..*.*.*..*.*..*.*..*.*.*..*.*..*.*..*.*.*..*.*..*.*..*.*
| |
2600 ++ |
2400 ++ |
| |
2200 ++ |
2000 ++ |
1800 ++ |
| |
1600 ++ |
1400 ++ |
| |
1200 O+O O O O O O O O O O O O O O O O O O O O O O O O O O O |
1000 ++------------------------------------------------------------------+

[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.

Thanks,
Fengguang
---
testcase: will-it-scale
default_monitors:
watch-oom:
wait: pre-test
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
energy:
cpuidle:
cpufreq:
turbostat:
sched_debug:
interval: 10
pmeter:
model: Grantley Haswell
nr_cpu: 16
memory: 16G
hdd_partitions:
swap_partitions:
rootfs_partition:
perf-profile:
freq: 800
will-it-scale:
test:
- brk1
branch: linus/master
commit: 19583ca584d6f574384e17fe7613dfaeadcdc4a6
repeat_to: 3
enqueue_time: 2014-09-25 21:45:43.051375027 +08:00
testbox: lituya
kconfig: x86_64-rhel
kernel: "/kernel/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/vmlinuz-3.16.0"
user: lkp
queue: wfg
result_root: "/result/lituya/will-it-scale/brk1/debian-x86_64.cgz/x86_64-rhel/19583ca584d6f574384e17fe7613dfaeadcdc4a6/0"
job_file: "/lkp/scheduled/lituya/wfg_will-it-scale-brk1-x86_64-rhel-19583ca584d6f574384e17fe7613dfaeadcdc4a6-2.yaml"
dequeue_time: 2014-09-25 23:19:40.512443629 +08:00
history_time: 300
job_state: finished
loadavg: 14.25 6.83 2.71 1/443 6625
start_time: '1411658412'
end_time: '1411658717'
version: "/lkp/lkp/.src-20140925-212910"
./runtest.py brk1 32 1 8 12 16
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx