[LKP] [mm] 1306a85aed3: +5.8% will-it-scale.per_thread_ops

From: Huang Ying
Date: Wed Dec 17 2014 - 03:18:33 EST


FYI, we noticed the following changes on

commit 1306a85aed3ec3db98945aafb7dfbe5648a1203c ("mm: embed the memcg pointer directly into struct page")
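
For context: the commit under test stores the mem_cgroup pointer in struct page itself instead of reaching it through the separate page_cgroup array, which matches the perf-profile rows below (e.g. mem_cgroup_page_lruvec dropping from 1.35% to 0.04% of cycles). A minimal before/after sketch follows; struct and helper names are simplified stand-ins, not the exact upstream code:

/*
 * Self-contained sketch of the data-structure change under test.
 * Names are simplified stand-ins, not the exact kernel code.
 */
#include <stdio.h>

struct mem_cgroup { int id; };

/* Before: the owning memcg was reached through a separate page_cgroup
 * array, adding an extra (often cache-cold) lookup per page operation. */
struct page_cgroup { struct mem_cgroup *mem_cgroup; };

struct page_old { unsigned long flags; };

static struct page_cgroup *page_cgroup_table;   /* stand-in for the per-node array */

static struct mem_cgroup *page_memcg_old(struct page_old *page,
                                         struct page_old *base)
{
        /* index into a parallel array, pfn-style */
        return page_cgroup_table[page - base].mem_cgroup;
}

/* After 1306a85aed3: the pointer lives in struct page itself, so hot
 * paths such as mem_cgroup_page_lruvec() become a plain field read. */
struct page_new {
        unsigned long flags;
        struct mem_cgroup *mem_cgroup;
};

static struct mem_cgroup *page_memcg_new(struct page_new *page)
{
        return page->mem_cgroup;
}

int main(void)
{
        struct mem_cgroup memcg = { .id = 1 };

        struct page_old opage = { .flags = 0 };
        struct page_cgroup table[1] = { { .mem_cgroup = &memcg } };
        page_cgroup_table = table;

        struct page_new npage = { .flags = 0, .mem_cgroup = &memcg };

        printf("old lookup -> memcg %d\n", page_memcg_old(&opage, &opage)->id);
        printf("new lookup -> memcg %d\n", page_memcg_new(&npage)->id);
        return 0;
}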


testbox/testcase/testparams: lkp-snb01/will-it-scale/performance-page_fault2

22811c6bc3c764d8  1306a85aed3ec3db98945aafb7
----------------  --------------------------
         %stddev     %change         %stddev
             \          |                \
185591 ± 0% +5.8% 196339 ± 0% will-it-scale.per_thread_ops
268066 ± 0% +4.2% 279258 ± 0% will-it-scale.per_process_ops
66204 ± 47% -79.9% 13282 ± 6% sched_debug.cpu#14.sched_count
726 ± 12% -100.0% 0 ± 0% slabinfo.blkdev_requests.num_objs
726 ± 12% -100.0% 0 ± 0% slabinfo.blkdev_requests.active_objs
282 ± 11% -86.2% 39 ± 0% slabinfo.bdev_cache.num_objs
282 ± 11% -86.2% 39 ± 0% slabinfo.bdev_cache.active_objs
536 ± 10% -92.7% 39 ± 0% slabinfo.blkdev_ioc.num_objs
536 ± 10% -92.7% 39 ± 0% slabinfo.blkdev_ioc.active_objs
745 ± 13% -93.0% 52 ± 34% slabinfo.xfs_buf.num_objs
1.35 ± 2% -97.0% 0.04 ± 17% perf-profile.cpu-cycles.mem_cgroup_page_lruvec.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.unmap_page_range
70832 ± 7% -84.6% 10928 ± 0% meminfo.DirectMap4k
745 ± 13% -93.0% 52 ± 34% slabinfo.xfs_buf.active_objs
20 ± 34% +173.8% 54 ± 38% sched_debug.cfs_rq[25]:/.runnable_load_avg
21 ± 32% +163.5% 56 ± 37% sched_debug.cfs_rq[25]:/.load
21 ± 32% +163.5% 56 ± 37% sched_debug.cpu#25.load
6.68 ± 2% -69.0% 2.07 ± 4% perf-profile.cpu-cycles.lru_cache_add_active_or_unevictable.do_cow_fault.handle_mm_fault.__do_page_fault.do_page_fault
11481 ± 40% -60.4% 4550 ± 24% sched_debug.cpu#31.sched_count
35880 ± 29% -54.4% 16355 ± 20% sched_debug.cpu#8.sched_count
30 ± 44% +90.8% 57 ± 34% sched_debug.cpu#25.cpu_load[0]
258 ± 42% -58.4% 107 ± 21% sched_debug.cfs_rq[20]:/.blocked_load_avg
615 ± 47% -55.8% 271 ± 18% sched_debug.cpu#22.ttwu_local
24 ± 36% +81.6% 44 ± 26% sched_debug.cpu#25.cpu_load[1]
31132 ± 41% -47.8% 16259 ± 47% sched_debug.cpu#13.sched_count
287 ± 37% -53.0% 135 ± 18% sched_debug.cfs_rq[20]:/.tg_load_contrib
2755 ± 22% +79.7% 4950 ± 36% sched_debug.cpu#8.ttwu_local
9 ± 22% +69.2% 16 ± 31% sched_debug.cpu#14.cpu_load[0]
8626 ± 14% -46.4% 4621 ± 32% sched_debug.cpu#0.ttwu_local
37 ± 44% -43.6% 21 ± 22% sched_debug.cpu#31.cpu_load[1]
390 ± 13% -45.3% 213 ± 16% sched_debug.cfs_rq[25]:/.blocked_load_avg
14 ± 24% -40.4% 8 ± 25% sched_debug.cpu#13.cpu_load[0]
309688 ± 24% -44.8% 170966 ± 34% sched_debug.cfs_rq[18]:/.spread0
410 ± 13% -34.6% 268 ± 7% sched_debug.cfs_rq[25]:/.tg_load_contrib
20 ± 30% +64.6% 33 ± 17% sched_debug.cpu#25.cpu_load[2]
370117 ± 6% -43.0% 210857 ± 45% sched_debug.cfs_rq[17]:/.spread0
28 ± 29% -34.2% 18 ± 10% sched_debug.cpu#31.cpu_load[2]
16558 ± 28% -40.9% 9784 ± 11% sched_debug.cfs_rq[8]:/.exec_clock
8517 ± 15% -32.9% 5715 ± 9% sched_debug.cpu#20.sched_count
2301 ± 29% +68.2% 3871 ± 17% sched_debug.cpu#29.ttwu_count
13 ± 17% -35.8% 8 ± 26% sched_debug.cfs_rq[13]:/.runnable_load_avg
2317 ± 6% -26.5% 1703 ± 18% sched_debug.cpu#13.curr->pid
2470 ± 12% -23.3% 1893 ± 12% sched_debug.cpu#15.curr->pid
12 ± 14% -28.0% 9 ± 7% sched_debug.cpu#13.cpu_load[3]
330696 ± 22% -35.6% 212829 ± 5% sched_debug.cfs_rq[8]:/.min_vruntime
42 ± 38% -43.8% 23 ± 15% sched_debug.cpu#24.cpu_load[0]
2556 ± 6% +42.8% 3649 ± 9% sched_debug.cpu#25.curr->pid
33 ± 33% -34.6% 21 ± 3% sched_debug.cfs_rq[5]:/.load
33 ± 33% -33.1% 22 ± 7% sched_debug.cpu#5.load
3595 ± 17% -25.0% 2697 ± 5% sched_debug.cpu#17.ttwu_count
24718 ± 15% -27.3% 17972 ± 13% sched_debug.cpu#0.nr_switches
18 ± 25% +45.2% 26 ± 10% sched_debug.cpu#25.cpu_load[3]
7788 ± 16% -24.8% 5857 ± 5% sched_debug.cpu#17.nr_switches
17 ± 12% +31.4% 23 ± 17% sched_debug.cpu#1.cpu_load[3]
18 ± 10% +33.3% 24 ± 16% sched_debug.cpu#1.cpu_load[2]
6091 ± 5% -26.8% 4460 ± 25% sched_debug.cpu#31.nr_switches
3956 ± 15% -28.8% 2816 ± 16% sched_debug.cpu#31.ttwu_count
4.82 ± 1% -24.3% 3.65 ± 1% perf-profile.cpu-cycles.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.unmap_page_range.unmap_single_vma
13 ± 9% -26.9% 9 ± 11% sched_debug.cpu#13.cpu_load[2]
3327 ± 11% -20.2% 2655 ± 11% sched_debug.cpu#4.curr->pid
4.91 ± 1% -23.8% 3.74 ± 1% perf-profile.cpu-cycles.tlb_flush_mmu_free.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region
4.91 ± 1% -23.7% 3.74 ± 1% perf-profile.cpu-cycles.free_pages_and_swap_cache.tlb_flush_mmu_free.unmap_page_range.unmap_single_vma.unmap_vmas
36 ± 8% -22.9% 27 ± 7% sched_debug.cpu#17.cpu_load[0]
1.74 ± 2% -22.8% 1.34 ± 2% perf-profile.cpu-cycles.unlock_page.do_cow_fault.handle_mm_fault.__do_page_fault.do_page_fault
17 ± 21% +33.8% 22 ± 7% sched_debug.cpu#25.cpu_load[4]
347045 ± 0% -20.8% 274703 ± 0% meminfo.Inactive(file)
86761 ± 0% -20.8% 68676 ± 0% proc-vmstat.nr_inactive_file
42941 ± 0% -20.7% 34065 ± 1% numa-vmstat.node0.nr_inactive_file
171765 ± 0% -20.7% 136260 ± 1% numa-meminfo.node0.Inactive(file)
175280 ± 0% -21.0% 138443 ± 1% numa-meminfo.node1.Inactive(file)
43819 ± 0% -21.0% 34611 ± 1% numa-vmstat.node1.nr_inactive_file
14245 ± 13% -28.8% 10144 ± 18% sched_debug.cpu#0.ttwu_count
34770 ± 14% +29.3% 44960 ± 18% sched_debug.cfs_rq[1]:/.exec_clock
1.23 ± 1% +23.8% 1.52 ± 2% perf-profile.cpu-cycles._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_cow_fault
17 ± 21% +23.5% 21 ± 7% sched_debug.cpu#29.cpu_load[3]
32 ± 5% -12.2% 28 ± 8% sched_debug.cpu#21.cpu_load[1]
35 ± 9% -19.1% 28 ± 8% sched_debug.cpu#17.cpu_load[1]
10608 ± 2% -17.2% 8783 ± 4% numa-vmstat.node0.nr_active_file
42435 ± 2% -17.2% 35136 ± 4% numa-meminfo.node0.Active(file)
63836 ± 0% -16.9% 53045 ± 0% numa-vmstat.node1.numa_interleave
53212 ± 0% -16.3% 44533 ± 0% numa-vmstat.node0.numa_interleave
84165 ± 0% -16.2% 70563 ± 0% meminfo.Active(file)
21040 ± 0% -16.2% 17640 ± 0% proc-vmstat.nr_active_file
6709 ± 0% +18.4% 7944 ± 3% sched_debug.cfs_rq[20]:/.tg_load_avg
6711 ± 0% +18.5% 7950 ± 3% sched_debug.cfs_rq[21]:/.tg_load_avg
35768 ± 9% -15.0% 30418 ± 8% sched_debug.cpu#8.nr_load_updates
6714 ± 0% +18.4% 7946 ± 3% sched_debug.cfs_rq[22]:/.tg_load_avg
6717 ± 0% +18.0% 7924 ± 3% sched_debug.cfs_rq[18]:/.tg_load_avg
6712 ± 0% +17.9% 7910 ± 3% sched_debug.cfs_rq[19]:/.tg_load_avg
6688 ± 1% +17.9% 7883 ± 2% sched_debug.cfs_rq[23]:/.tg_load_avg
33 ± 5% -16.5% 27 ± 2% sched_debug.cpu#21.cpu_load[0]
6893 ± 0% +17.4% 8092 ± 3% sched_debug.cfs_rq[7]:/.tg_load_avg
6988 ± 1% +15.6% 8078 ± 4% sched_debug.cfs_rq[0]:/.tg_load_avg
6577 ± 1% +18.0% 7760 ± 3% sched_debug.cfs_rq[30]:/.tg_load_avg
6982 ± 1% +16.1% 8105 ± 3% sched_debug.cfs_rq[3]:/.tg_load_avg
6875 ± 0% +17.6% 8085 ± 3% sched_debug.cfs_rq[8]:/.tg_load_avg
6579 ± 1% +17.8% 7748 ± 3% sched_debug.cfs_rq[29]:/.tg_load_avg
7016 ± 1% +15.2% 8083 ± 4% sched_debug.cfs_rq[1]:/.tg_load_avg
6873 ± 0% +17.0% 8042 ± 3% sched_debug.cfs_rq[9]:/.tg_load_avg
7005 ± 1% +15.4% 8084 ± 3% sched_debug.cfs_rq[2]:/.tg_load_avg
34 ± 5% -13.9% 29 ± 6% sched_debug.cpu#20.cpu_load[0]
6737 ± 1% +17.6% 7922 ± 3% sched_debug.cfs_rq[17]:/.tg_load_avg
6742 ± 1% +17.4% 7912 ± 3% sched_debug.cfs_rq[16]:/.tg_load_avg
6575 ± 1% +17.4% 7720 ± 3% sched_debug.cfs_rq[31]:/.tg_load_avg
8.09 ± 1% -13.8% 6.97 ± 0% perf-profile.cpu-cycles.munmap
8.08 ± 1% -13.7% 6.97 ± 0% perf-profile.cpu-cycles.system_call_fastpath.munmap
27 ± 6% -9.0% 25 ± 4% sched_debug.cfs_rq[23]:/.runnable_load_avg
8.07 ± 1% -13.8% 6.96 ± 0% perf-profile.cpu-cycles.do_munmap.vm_munmap.sys_munmap.system_call_fastpath.munmap
8.07 ± 1% -13.8% 6.95 ± 0% perf-profile.cpu-cycles.unmap_region.do_munmap.vm_munmap.sys_munmap.system_call_fastpath
8.08 ± 1% -13.8% 6.97 ± 0% perf-profile.cpu-cycles.vm_munmap.sys_munmap.system_call_fastpath.munmap
8.08 ± 1% -13.8% 6.97 ± 0% perf-profile.cpu-cycles.sys_munmap.system_call_fastpath.munmap
6939 ± 1% +16.4% 8080 ± 3% sched_debug.cfs_rq[6]:/.tg_load_avg
6710 ± 1% +16.4% 7812 ± 3% sched_debug.cfs_rq[24]:/.tg_load_avg
6653 ± 1% +17.0% 7783 ± 3% sched_debug.cfs_rq[26]:/.tg_load_avg
622401 ± 4% +15.2% 717037 ± 11% sched_debug.cfs_rq[1]:/.min_vruntime
1504 ± 1% -13.6% 1300 ± 7% slabinfo.sock_inode_cache.active_objs
30 ± 8% -15.4% 26 ± 5% sched_debug.cpu#23.load
1504 ± 1% -13.6% 1300 ± 7% slabinfo.sock_inode_cache.num_objs
30 ± 8% -15.4% 26 ± 5% sched_debug.cfs_rq[23]:/.load
7.46 ± 0% -13.3% 6.47 ± 0% perf-profile.cpu-cycles.unmap_vmas.unmap_region.do_munmap.vm_munmap.sys_munmap
7.46 ± 0% -13.3% 6.47 ± 0% perf-profile.cpu-cycles.unmap_single_vma.unmap_vmas.unmap_region.do_munmap.vm_munmap
5.11 ± 1% +15.5% 5.90 ± 0% perf-profile.cpu-cycles.__list_del_entry.list_del.__rmqueue.get_page_from_freelist.__alloc_pages_nodemask
6887 ± 0% +16.0% 7986 ± 3% sched_debug.cfs_rq[10]:/.tg_load_avg
6645 ± 2% +17.1% 7783 ± 3% sched_debug.cfs_rq[25]:/.tg_load_avg
7.40 ± 0% -13.4% 6.41 ± 0% perf-profile.cpu-cycles.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
5.16 ± 1% +15.7% 5.96 ± 0% perf-profile.cpu-cycles.list_del.__rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
687523 ± 2% +11.9% 769461 ± 4% sched_debug.cfs_rq[0]:/.min_vruntime
6834 ± 0% +16.8% 7979 ± 3% sched_debug.cfs_rq[14]:/.tg_load_avg
6885 ± 0% +16.1% 7996 ± 3% sched_debug.cfs_rq[12]:/.tg_load_avg
6894 ± 0% +16.1% 8005 ± 3% sched_debug.cfs_rq[11]:/.tg_load_avg
6803 ± 1% +16.2% 7901 ± 3% sched_debug.cfs_rq[15]:/.tg_load_avg
6963 ± 1% +16.1% 8087 ± 3% sched_debug.cfs_rq[5]:/.tg_load_avg
6841 ± 0% +16.8% 7991 ± 3% sched_debug.cfs_rq[13]:/.tg_load_avg
5.64 ± 1% +14.8% 6.48 ± 0% perf-profile.cpu-cycles.__rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_cow_fault
403 ± 7% +13.6% 458 ± 6% sched_debug.cfs_rq[1]:/.tg_runnable_contrib
6967 ± 1% +15.9% 8078 ± 3% sched_debug.cfs_rq[4]:/.tg_load_avg
18553 ± 7% +13.6% 21084 ± 6% sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
6645 ± 1% +16.7% 7755 ± 3% sched_debug.cfs_rq[27]:/.tg_load_avg
8.77 ± 0% +14.1% 10.00 ± 0% perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_cow_fault.handle_mm_fault
37777 ± 12% +20.6% 45541 ± 2% sched_debug.cfs_rq[24]:/.exec_clock
67160 ± 8% -12.5% 58785 ± 8% sched_debug.cfs_rq[18]:/.exec_clock
6641 ± 2% +16.6% 7742 ± 3% sched_debug.cfs_rq[28]:/.tg_load_avg
35 ± 9% -17.0% 29 ± 10% sched_debug.cpu#17.cpu_load[2]
34 ± 9% -13.7% 30 ± 9% sched_debug.cpu#17.cpu_load[3]
9.53 ± 0% +12.7% 10.74 ± 0% perf-profile.cpu-cycles.__alloc_pages_nodemask.alloc_pages_vma.do_cow_fault.handle_mm_fault.__do_page_fault
10.08 ± 0% +12.5% 11.34 ± 0% perf-profile.cpu-cycles.alloc_pages_vma.do_cow_fault.handle_mm_fault.__do_page_fault.do_page_fault
41728 ± 2% -15.1% 35425 ± 4% numa-meminfo.node1.Active(file)
10431 ± 2% -15.1% 8856 ± 4% numa-vmstat.node1.nr_active_file
19883 ± 0% -10.0% 17893 ± 1% slabinfo.radix_tree_node.num_objs
7.52 ± 1% +11.3% 8.37 ± 1% perf-profile.cpu-cycles._raw_spin_lock.do_cow_fault.handle_mm_fault.__do_page_fault.do_page_fault
14873 ± 5% -11.0% 13243 ± 6% sched_debug.cpu#14.nr_switches
56 ± 3% -7.1% 52 ± 6% sched_debug.cpu#16.cpu_load[2]
19817 ± 0% -9.9% 17856 ± 0% slabinfo.radix_tree_node.active_objs
49459 ± 10% +14.7% 56743 ± 2% sched_debug.cpu#25.nr_load_updates
741856 ± 10% +16.5% 864387 ± 2% sched_debug.cfs_rq[24]:/.min_vruntime
31.79 ± 0% -9.3% 28.84 ± 0% perf-profile.cpu-cycles.do_cow_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
47.90 ± 1% +16.9% 55.99 ± 2% time.user_time
238256 ± 0% +8.4% 258184 ± 0% time.voluntary_context_switches
2.015e+08 ± 0% +8.4% 2.184e+08 ± 0% time.minor_page_faults
476 ± 0% +5.9% 504 ± 0% time.percent_of_cpu_this_job_got
1441 ± 0% +5.5% 1520 ± 0% time.system_time
40.26 ± 0% +2.0% 41.04 ± 0% turbostat.%c0

lkp-snb01: Sandy Bridge-EP
Memory: 32G




time.minor_page_faults

2.5e+08 ++----------------------------------------------------------------+
| |
O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O
2e+08 *+*.*.*.*.*.*.*..*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*..*.*.*.*.*.*.*.* |
| |
| |
1.5e+08 ++ |
| |
1e+08 ++ |
| |
| |
5e+07 ++ |
| |
| |
0 ++----------O-----------------------------------------------------+

[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Huang, Ying

---
testcase: will-it-scale
default_monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  cpuidle:
  cpufreq:
  turbostat:
  sched_debug:
    interval: 10
  pmeter:
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor:
- performance
commit: 0d8e01273cef9529f77af199b1b7de51a7a294c5
model: Sandy Bridge-EP
memory: 32G
hdd_partitions: "/dev/sda2"
swap_partitions:
perf-profile:
  freq: 800
will-it-scale:
  test:
  - page_fault2
testbox: lkp-snb01
tbox_group: lkp-snb01
kconfig: x86_64-rhel
enqueue_time: 2014-12-13 19:19:16.931955151 +08:00
head_commit: 0d8e01273cef9529f77af199b1b7de51a7a294c5
base_commit: b2776bf7149bddd1f4161f14f79520f17fc1d71d
branch: linux-devel/devel-hourly-2014121307
kernel: "/kernel/x86_64-rhel/0d8e01273cef9529f77af199b1b7de51a7a294c5/vmlinuz-3.18.0-g0d8e012"
user: lkp
queue: cyclic
rootfs: debian-x86_64.cgz
result_root: "/result/lkp-snb01/will-it-scale/performance-page_fault2/debian-x86_64.cgz/x86_64-rhel/0d8e01273cef9529f77af199b1b7de51a7a294c5/0"
job_file: "/lkp/scheduled/lkp-snb01/cyclic_will-it-scale-performance-page_fault2-x86_64-rhel-HEAD-0d8e01273cef9529f77af199b1b7de51a7a294c5-0.yaml"
dequeue_time: 2014-12-14 03:40:23.689060918 +08:00
job_state: finished
loadavg: 23.53 12.15 4.88 1/302 10072
start_time: '1418499666'
end_time: '1418499978'
version: "/lkp/lkp/.src-20141213-150527"
echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
./runtest.py page_fault2 25 1 8 16 24 32
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx