[lkp] [writeback] 074e46e1ad: fio.write_bw_MBps -10.5% regression

From: kernel test robot
Date: Wed May 18 2016 - 21:18:32 EST


FYI, we noticed a -10.5% fio.write_bw_MBps regression due to commit:

commit 074e46e1aded0ac0474a5db4d50d514d9dc42e78 ("writeback: throttle buffered writeback")
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git wb-buf-throttle

in testcase: fio-basic
on test machine: lkp-bdw-ep2: 88 threads Broadwell-EP with 128G memory
with the following parameters: bs=4k/cpufreq_governor=performance/disk=1SSD/fs=xfs/ioengine=sync/nr_task=8/runtime=300/rw=write/test_size=512g



Details are as below:
-------------------------------------------------------------------------------------------------->


=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase:
4k/gcc-4.9/performance/1SSD/xfs/sync/x86_64-rhel/8/debian-x86_64-2015-02-07.cgz/300/write/lkp-bdw-ep2/512g/fio-basic

commit:
565d33b77b63e5f70357ebf97c4141044854f621
074e46e1aded0ac0474a5db4d50d514d9dc42e78

565d33b77b63e5f7 074e46e1aded0ac0474a5db4d5
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :5 kmsg.ENERGY_PERF_BIAS:Set_to'normal',was'performance'
1:4 -25% :5 kmsg.ENERGY_PERF_BIAS:View_and_update_with_x86_energy_perf_policy(#)
%stddev %change %stddev
\ | \
289.61 ± 0% +3.7% 300.36 ± 0% fio.time.elapsed_time
289.61 ± 0% +3.7% 300.36 ± 0% fio.time.elapsed_time.max
1.074e+09 ± 0% -7.1% 9.971e+08 ± 0% fio.time.file_system_outputs
3446 ± 9% -64.2% 1234 ± 27% fio.time.involuntary_context_switches
105.50 ± 0% -12.8% 92.00 ± 0% fio.time.percent_of_cpu_this_job_got
268.34 ± 0% -10.4% 240.47 ± 1% fio.time.system_time
98347 ± 0% +4.6% 102892 ± 0% fio.time.voluntary_context_switches
1812 ± 0% -10.5% 1622 ± 0% fio.write_bw_MBps
463987 ± 0% -10.5% 415407 ± 0% fio.write_iops
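(For reference, the %change column above is the relative delta between the two run means, (new - old) / old; a minimal sketch using the write-bandwidth means from the table, just to show the arithmetic behind the headline number:)

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from the baseline mean to the patched mean."""
    return (new - old) / old * 100

# Means from the fio.write_bw_MBps row: 1812 MB/s before, 1622 MB/s after.
delta = pct_change(1812, 1622)
print(f"{delta:.1f}%")  # rounds to the -10.5% reported above
```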
239.56 ± 0% -1.9% 235.12 ± 0% pmeter.Average_Active_Power
1944 ± 0% -8.8% 1773 ± 0% pmeter.performance_per_watt
3446 ± 9% -64.2% 1234 ± 27% time.involuntary_context_switches
105.50 ± 0% -12.8% 92.00 ± 0% time.percent_of_cpu_this_job_got
268.34 ± 0% -10.4% 240.47 ± 1% time.system_time
1774732 ± 0% -10.9% 1581981 ± 0% vmstat.io.bo
8570 ± 1% -48.5% 4417 ± 1% vmstat.system.cs
17991 ± 0% -11.1% 15999 ± 0% vmstat.system.in
31090 ± 2% -8.5% 28447 ± 1% meminfo.Active(anon)
7169 ± 12% -32.5% 4839 ± 0% meminfo.AnonHugePages
28207 ± 3% -8.3% 25865 ± 1% meminfo.AnonPages
969710 ± 0% -12.9% 844795 ± 0% meminfo.Writeback
4437 ± 22% -32.7% 2986 ± 24% numa-vmstat.node0.nr_active_anon
3915 ± 19% -27.6% 2835 ± 18% numa-vmstat.node0.nr_anon_pages
125910 ± 6% -14.5% 107612 ± 7% numa-vmstat.node0.nr_writeback
17293 ± 5% -6.5% 16170 ± 5% numa-vmstat.node1.nr_slab_unreclaimable
120890 ± 5% -14.7% 103101 ± 8% numa-vmstat.node1.nr_writeback
1.837e+08 ± 5% +12.8% 2.072e+08 ± 5% cpuidle.C1-BDW.time
2471919 ± 1% +15.3% 2849758 ± 3% cpuidle.C1-BDW.usage
1.565e+08 ± 11% -13.7% 1.35e+08 ± 10% cpuidle.C1E-BDW.time
2.784e+08 ± 16% -80.3% 54791144 ± 18% cpuidle.C3-BDW.time
380002 ± 5% -24.9% 285251 ± 10% cpuidle.C3-BDW.usage
1340681 ± 2% -32.0% 911539 ± 7% cpuidle.C6-BDW.usage
71772 ± 6% -9.0% 65308 ± 4% numa-meminfo.node0.Active
17749 ± 22% -32.7% 11947 ± 24% numa-meminfo.node0.Active(anon)
5249 ± 49% -66.0% 1786 ±123% numa-meminfo.node0.AnonHugePages
15662 ± 19% -27.6% 11342 ± 18% numa-meminfo.node0.AnonPages
503521 ± 6% -14.1% 432594 ± 7% numa-meminfo.node0.Writeback
69173 ± 5% -6.5% 64685 ± 5% numa-meminfo.node1.SUnreclaim
483701 ± 5% -14.7% 412545 ± 8% numa-meminfo.node1.Writeback
1.77 ± 1% -14.4% 1.52 ± 1% turbostat.%Busy
59.50 ± 1% -13.6% 51.40 ± 0% turbostat.Avg_MHz
6.52 ± 2% -17.7% 5.36 ± 2% turbostat.CPU%c1
1.47 ± 20% -82.3% 0.26 ± 23% turbostat.CPU%c3
0.24 ± 48% +52.4% 0.36 ± 5% turbostat.Pkg%pc6
93.44 ± 0% -2.6% 90.99 ± 0% turbostat.PkgWatt
58.25 ± 0% -1.8% 57.22 ± 1% turbostat.RAMWatt
675.50 ± 6% -46.5% 361.60 ± 7% proc-vmstat.kswapd_low_wmark_hit_quickly
7768 ± 2% -8.5% 7108 ± 1% proc-vmstat.nr_active_anon
7047 ± 3% -8.3% 6463 ± 1% proc-vmstat.nr_anon_pages
242465 ± 0% -12.8% 211436 ± 0% proc-vmstat.nr_writeback
1544 ± 5% -9.8% 1393 ± 1% proc-vmstat.pgactivate
1.058e+08 ± 0% -9.4% 95845408 ± 0% proc-vmstat.pgfree
1.015e+08 ± 0% -9.4% 91969429 ± 0% proc-vmstat.pgscan_kswapd_normal
1.014e+08 ± 0% -9.4% 91873948 ± 0% proc-vmstat.pgsteal_kswapd_normal
1.76 ± 3% -23.4% 1.35 ± 4% perf-profile.cycles-pp.blk_mq_make_request
18.28 ± 5% -14.4% 15.64 ± 7% perf-profile.cycles-pp.call_cpuidle
19.04 ± 5% -14.1% 16.34 ± 7% perf-profile.cycles-pp.cpu_startup_entry
18.27 ± 5% -14.5% 15.63 ± 7% perf-profile.cycles-pp.cpuidle_enter
15.93 ± 5% -15.3% 13.50 ± 9% perf-profile.cycles-pp.cpuidle_enter_state
1.79 ± 3% -22.5% 1.39 ± 2% perf-profile.cycles-pp.generic_make_request
15.70 ± 5% -16.1% 13.17 ± 10% perf-profile.cycles-pp.intel_idle
18.21 ± 5% -15.5% 15.40 ± 7% perf-profile.cycles-pp.start_secondary
1.79 ± 3% -22.5% 1.39 ± 2% perf-profile.cycles-pp.submit_bio
2.56 ± 0% -22.5% 1.99 ± 2% perf-profile.cycles-pp.xfs_submit_ioend
1.80 ± 3% -22.5% 1.39 ± 2% perf-profile.cycles-pp.xfs_submit_ioend_bio
6238 ± 1% -14.0% 5367 ± 3% slabinfo.kmalloc-1024.active_objs
6430 ± 1% -15.4% 5442 ± 3% slabinfo.kmalloc-1024.num_objs
25207 ± 3% -65.5% 8700 ± 2% slabinfo.kmalloc-128.active_objs
398.75 ± 3% -66.0% 135.40 ± 1% slabinfo.kmalloc-128.active_slabs
25559 ± 3% -65.9% 8709 ± 1% slabinfo.kmalloc-128.num_objs
398.75 ± 3% -66.0% 135.40 ± 1% slabinfo.kmalloc-128.num_slabs
14482 ± 0% -35.4% 9349 ± 1% slabinfo.kmalloc-192.active_objs
347.75 ± 0% -36.0% 222.40 ± 1% slabinfo.kmalloc-192.active_slabs
14626 ± 0% -36.0% 9360 ± 1% slabinfo.kmalloc-192.num_objs
347.75 ± 0% -36.0% 222.40 ± 1% slabinfo.kmalloc-192.num_slabs
1644 ± 1% -10.0% 1480 ± 1% slabinfo.kmalloc-4096.active_objs
1703 ± 1% -9.5% 1541 ± 1% slabinfo.kmalloc-4096.num_objs
22806 ± 2% -9.4% 20665 ± 3% slabinfo.kmalloc-512.active_objs
23061 ± 3% -10.4% 20670 ± 3% slabinfo.kmalloc-512.num_objs
49871 ± 0% -16.9% 41459 ± 0% slabinfo.kmalloc-64.active_objs
783.00 ± 0% -17.0% 649.80 ± 0% slabinfo.kmalloc-64.active_slabs
50152 ± 0% -17.1% 41598 ± 0% slabinfo.kmalloc-64.num_objs
783.00 ± 0% -17.0% 649.80 ± 0% slabinfo.kmalloc-64.num_slabs
22015 ± 1% -20.7% 17458 ± 4% slabinfo.kmalloc-96.active_objs
529.25 ± 1% -20.8% 419.40 ± 4% slabinfo.kmalloc-96.active_slabs
22233 ± 1% -20.7% 17632 ± 4% slabinfo.kmalloc-96.num_objs
529.25 ± 1% -20.8% 419.40 ± 4% slabinfo.kmalloc-96.num_slabs
34.21 ±154% -93.6% 2.21 ± 38% sched_debug.cfs_rq:/.runnable_load_avg.avg
2872 ±161% -96.2% 108.07 ± 44% sched_debug.cfs_rq:/.runnable_load_avg.max
306.53 ±160% -95.7% 13.25 ± 41% sched_debug.cfs_rq:/.runnable_load_avg.stddev
34.84 ± 9% -21.3% 27.43 ± 3% sched_debug.cfs_rq:/.util_avg.avg
471.10 ± 13% -17.3% 389.73 ± 6% sched_debug.cfs_rq:/.util_avg.max
83.48 ± 6% -18.1% 68.41 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
350786 ± 3% +32.0% 463179 ± 9% sched_debug.cpu.avg_idle.min
165474 ± 9% -23.8% 126079 ± 12% sched_debug.cpu.avg_idle.stddev
167991 ± 2% +19.6% 200863 ± 0% sched_debug.cpu.clock.avg
167997 ± 2% +19.6% 200869 ± 0% sched_debug.cpu.clock.max
167942 ± 2% +19.6% 200832 ± 0% sched_debug.cpu.clock.min
8.46 ± 23% -28.8% 6.02 ± 21% sched_debug.cpu.clock.stddev
167991 ± 2% +19.6% 200863 ± 0% sched_debug.cpu.clock_task.avg
167997 ± 2% +19.6% 200869 ± 0% sched_debug.cpu.clock_task.max
167942 ± 2% +19.6% 200832 ± 0% sched_debug.cpu.clock_task.min
8.46 ± 23% -28.8% 6.02 ± 21% sched_debug.cpu.clock_task.stddev
3039 ±153% -92.6% 224.07 ± 35% sched_debug.cpu.cpu_load[0].max
2930 ±155% -93.4% 192.50 ± 23% sched_debug.cpu.cpu_load[1].max
316.33 ±152% -91.6% 26.43 ± 24% sched_debug.cpu.cpu_load[1].stddev
34.90 ±142% -87.2% 4.47 ± 12% sched_debug.cpu.cpu_load[2].avg
2792 ±157% -93.9% 169.40 ± 19% sched_debug.cpu.cpu_load[2].max
300.71 ±154% -92.6% 22.15 ± 17% sched_debug.cpu.cpu_load[2].stddev
32.00 ±144% -88.4% 3.70 ± 10% sched_debug.cpu.cpu_load[3].avg
2573 ±157% -94.2% 150.10 ± 23% sched_debug.cpu.cpu_load[3].max
276.30 ±155% -93.2% 18.91 ± 17% sched_debug.cpu.cpu_load[3].stddev
26.92 ±143% -87.8% 3.28 ± 7% sched_debug.cpu.cpu_load[4].avg
2160 ±157% -93.3% 145.60 ± 19% sched_debug.cpu.cpu_load[4].max
231.48 ±155% -92.5% 17.47 ± 15% sched_debug.cpu.cpu_load[4].stddev
12611 ± 2% -33.4% 8396 ± 2% sched_debug.cpu.nr_switches.avg
68299 ± 7% -37.6% 42632 ± 11% sched_debug.cpu.nr_switches.max
14204 ± 2% -38.9% 8675 ± 1% sched_debug.cpu.nr_switches.stddev
13.50 ± 13% +95.1% 26.33 ± 15% sched_debug.cpu.nr_uninterruptible.max
-30.65 ±-20% +63.9% -50.23 ±-17% sched_debug.cpu.nr_uninterruptible.min
6.48 ± 7% +55.3% 10.07 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
167990 ± 2% +19.6% 200861 ± 0% sched_debug.cpu_clk
164597 ± 2% +19.9% 197433 ± 0% sched_debug.ktime
167990 ± 2% +19.6% 200861 ± 0% sched_debug.sched_clk



fio.write_bw_MBps

1850 ++-------------------------------------------------------------------+
| .*..*.*.. |
*. *..*..*.*..*..*.. .*..*..*..*.*..*..*..*.*..*.. .*.*..*..*
1800 ++ * *. |
| |
| |
1750 ++ |
| |
1700 ++ |
| |
| |
1650 ++ |
| |
O O O O O |
1600 ++-------------------------------------------------------------------+


fio.write_iops

470000 ++------*----------------------------------------------------------+
*..*.*. *.*..*..*.*..*.. .*.. .*. .*..*.*.. .*..*.*..*
460000 ++ *.*. *. *. *..*.*. |
| |
| |
450000 ++ |
| |
440000 ++ |
| |
430000 ++ |
| |
| |
420000 ++ |
O O O O O |
410000 ++-----------------------------------------------------------------+


turbostat.Avg_MHz

75 ++---------------------------------------------------------------------+
| |
| .*..*..*.*.. |
70 *+ |
| *..* |
| : |
65 ++ : |
| : |
60 ++ : .*.. .*..*.. .*.. .*
| *..*..* *..*. *.*. *..*..*..*.*..*..*. |
| |
55 ++ |
| |
O O O O O |
50 ++---------------------------------------------------------------------+


turbostat.%Busy

2.2 ++--------------------------------------------------------------------+
| .*..*. |
2.1 *+ *..*.. |
2 ++ .* |
| *. : |
1.9 ++ : |
| : |
1.8 ++ : .*..*.. .*. .*
| *.*..*..*..*..* *..*. *..*..*..*..*.*..*. |
1.7 ++ |
1.6 ++ |
| O |
1.5 O+ O O O |
| |
1.4 ++--------------------------------------------------------------------+


turbostat.CPU%c3

2 ++--------------------*-----------------------------------------*-----+
1.8 ++ :+ :: |
| *.. : + *.. .*. : : |
1.6 ++ .. : *.. + *. .*.. *. *.. : : |
1.4 ++ *. .* * + *. .. *..*..*.. : *..*
| .. *..*. * * : |
1.2 *+.* * |
1 ++ |
0.8 ++ |
| |
0.6 ++ |
0.4 ++ |
| O O O |
0.2 O+ O |
0 ++--------------------------------------------------------------------+


fio.time.system_time

285 ++--------------------------------------------------------------------+
280 ++ *.. * |
| + *. .* :: |
275 ++ *. : .*.. * : : |
270 *+ : * *.. : : : : .*.*..*..|
| : .. : : .*.. : *.. .*. |
265 ++ *..* *.. : *. *.. : .*. *
260 ++ * * *. |
255 ++ |
| |
250 ++ |
245 ++ O |
| O O |
240 ++ O |
235 O+--------------------------------------------------------------------+


fio.time.percent_of_cpu_this_job_got

110 ++-*--------------------------------------------*---------------------+
108 ++. *.*..* * : : |
* : .*.. : : : : |
106 ++ : * *.. : : .*.. : * .*.*..*..|
104 ++ : .. : *. : + .*. *
| *..* *..* *..* + .*. |
102 ++ *. |
100 ++ |
98 ++ |
| |
96 ++ |
94 ++ |
| O O |
92 O+ O O |
90 ++--------------------------------------------------------------------+


fio.time.elapsed_time

302 ++--------------------------------------------------------------------+
O O O O O |
300 ++ |
298 ++ |
| |
296 ++ |
| |
294 ++ |
| |
292 ++ *.. .*.. |
290 ++ .. *.. .*.. .*.. *..*. |
| .*..*..*.*..* * *. *..*. .. *.*..*..*
288 *+. .*. * |
| *..*.*. |
286 ++--------------------------------------------------------------------+


fio.time.elapsed_time.max

302 ++--------------------------------------------------------------------+
O O O O O |
300 ++ |
298 ++ |
| |
296 ++ |
| |
294 ++ |
| |
292 ++ *.. .*.. |
290 ++ .. *.. .*.. .*.. *..*. |
| .*..*..*.*..* * *. *..*. .. *.*..*..*
288 *+. .*. * |
| *..*.*. |
286 ++--------------------------------------------------------------------+


fio.time.voluntary_context_switches

104000 ++-----------------------------------------------------------------+
| |
103000 O+ O O O O |
102000 ++ |
| |
101000 ++ |
| |
100000 ++ |
| |
99000 ++ .*. .*.. *.. .*.. |
98000 ++ *..*..*.*..*. *..*..*..* *..*. .. * *..*.*..*
| + * |
97000 *+.*.*..*..* |
| |
96000 ++-----------------------------------------------------------------+


fio.time.file_system_outputs

1.08e+09 ++---------------------------------------------------------------+
*..*.*..*.*..*..*.*..*.*..*..*.*..*.*..*..*.*..*.*..*..*.*..*.*..*
1.07e+09 ++ |
1.06e+09 ++ |
| |
1.05e+09 ++ |
1.04e+09 ++ |
| |
1.03e+09 ++ |
1.02e+09 ++ |
| |
1.01e+09 ++ |
1e+09 ++ |
O O O O O |
9.9e+08 ++---------------------------------------------------------------+



[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml



Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fio-basic
default-monitors:
wait: activate-monitor
kmsg:
uptime:
iostat:
heartbeat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
interval: 10
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 60
cpufreq_governor: performance
NFS_HANG_DF_TIMEOUT: 200
NFS_HANG_CHECK_INTERVAL: 900
default-watchdogs:
oom-killer:
watchdog:
nfs-hang:
model: Broadwell-EP
memory: 128G
nr_ssd_partitions: 1
ssd_partitions: "/dev/nvme0n1p1"
swap_partitions:
rootfs_partition:
pmeter_server: lkp-os
pmeter_device: yokogawa-wt310
category: benchmark
disk: 1SSD
fs: xfs
nr_task: 8
fio-setup-basic:
runtime: 300
rw: write
bs: 4k
ioengine: sync
test_size: 512g
perf-profile:
freq: 1000
fio:
branch: block/wb-buf-throttle
commit: 074e46e1aded0ac0474a5db4d50d514d9dc42e78
queue: yhuang
testbox: lkp-bdw-ep2
tbox_group: lkp-bdw-ep2
kconfig: x86_64-rhel
enqueue_time: 2016-05-17 12:55:38.442979777 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: ef0537fdc85d90b21f409977fff302f32b5de098
user: lkp
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/074e46e1aded0ac0474a5db4d50d514d9dc42e78/vmlinuz-4.6.0-rc3-00213-g074e46e"
result_root: "/result/fio-basic/performance-1SSD-xfs-8-300-write-4k-sync-512g/lkp-bdw-ep2/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/074e46e1aded0ac0474a5db4d50d514d9dc42e78/0"
job_file: "/lkp/scheduled/lkp-bdw-ep2/yhuang_fio-basic-performance-1SSD-xfs-8-300-write-4k-sync-512g-debian-x86_64-2015-02-07.cgz-x86_64-rhel-074e46e1aded0ac0474a5db4d50d514d9dc42e78-20160517-26269-rlqpcg-0.yaml"
dequeue_time: 2016-05-17 13:39:29.116268674 +08:00
nr_cpu: "$(nproc)"
max_uptime: 3600
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-bdw-ep2/yhuang_fio-basic-performance-1SSD-xfs-8-300-write-4k-sync-512g-debian-x86_64-2015-02-07.cgz-x86_64-rhel-074e46e1aded0ac0474a5db4d50d514d9dc42e78-20160517-26269-rlqpcg-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=block/wb-buf-throttle
- commit=074e46e1aded0ac0474a5db4d50d514d9dc42e78
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/074e46e1aded0ac0474a5db4d50d514d9dc42e78/vmlinuz-4.6.0-rc3-00213-g074e46e
- max_uptime=3600
- RESULT_ROOT=/result/fio-basic/performance-1SSD-xfs-8-300-write-4k-sync-512g/lkp-bdw-ep2/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/074e46e1aded0ac0474a5db4d50d514d9dc42e78/0
- LKP_SERVER=inn
- |2-


earlyprintk=ttyS0,115200 systemd.log_level=err
debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
console=ttyS0,115200 console=tty0 vga=normal

rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/074e46e1aded0ac0474a5db4d50d514d9dc42e78/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/lkp/benchmarks/perf-profile-x86_64.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fio.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/074e46e1aded0ac0474a5db4d50d514d9dc42e78/linux-headers.cgz"
job_state: finished
loadavg: 8.48 5.75 2.54 3/966 9501
start_time: '1463463644'
end_time: '1463463944'
version: "/lkp/lkp/.src-20160517-133127"
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu10/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu11/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu12/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu13/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu14/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu15/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu16/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu17/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu18/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu19/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu20/cpufreq/scaling_governor
2016-05-17 13:40:36 echo performance > /sys/devices/system/cpu/cpu21/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu22/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu23/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu24/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu25/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu26/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu27/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu28/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu29/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu30/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu31/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu32/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu33/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu34/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu35/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu36/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu37/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu38/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu39/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu4/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu40/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu41/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu42/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu43/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu44/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu45/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu46/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu47/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu48/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu49/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu5/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu50/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu51/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu52/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu53/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu54/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu55/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu56/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu57/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu58/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu59/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu6/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu60/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu61/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu62/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu63/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu64/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu65/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu66/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu67/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu68/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu69/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu7/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu70/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu71/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu72/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu73/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu74/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu75/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu76/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu77/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu78/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu79/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu8/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu80/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu81/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu82/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu83/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu84/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu85/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu86/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu87/cpufreq/scaling_governor
2016-05-17 13:40:37 echo performance > /sys/devices/system/cpu/cpu9/cpufreq/scaling_governor
2016-05-17 13:40:38 mkfs -t xfs /dev/nvme0n1p1
2016-05-17 13:40:42 mount -t xfs -o nobarrier,inode64 /dev/nvme0n1p1 /fs/nvme0n1p1
2016-05-17 13:40:44 echo "
[global]
bs=4k
ioengine=sync
iodepth=32
size=68719476736
direct=0
runtime=300
invalidate=1
fallocate=posix
directory=/fs/nvme0n1p1
group_reporting

[task]
rw=write
numjobs=8" | fio --output-format=json -
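(Since fio is run with --output-format=json above, the bandwidth and IOPS figures in the comparison table can be read straight out of the report. A minimal sketch of that extraction; the payload below is a synthetic stand-in for a real report, and note that fio's JSON "bw" field is in KiB/s:)

```python
import json

# Synthetic stand-in for the JSON that `fio --output-format=json -`
# emits for the job file above; only the fields we read are included.
report = json.loads("""
{
  "jobs": [
    {"jobname": "task",
     "write": {"bw": 1855488, "iops": 463987}}
  ]
}
""")

job = report["jobs"][0]
bw_mbps = job["write"]["bw"] / 1024  # fio reports bw in KiB/s
print(f"{job['jobname']}: {bw_mbps:.0f} MB/s, {job['write']['iops']} IOPS")
```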