[lkp] [cpuidle] e132b9b3bc: No primary change, turbostat.%Busy -65.1% change

From: kernel test robot
Date: Mon Mar 28 2016 - 02:23:46 EST


FYI, we noticed a -65.1% change in turbostat.%Busy with your commit.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit e132b9b3bc7f19e9b158e42b323881d5dee5ecf3 ("cpuidle: menu: use high confidence factors only when considering polling")


=========================================================================================
compiler/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/1HDD/5K/btrfs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-ws02/400M/fsmark

commit:
3b99669b75db04e411bb298591224a9e8e4f57fb
e132b9b3bc7f19e9b158e42b323881d5dee5ecf3

3b99669b75db04e4           e132b9b3bc7f19e9b158e42b32
----------------           --------------------------
value ± %stddev   %change  value ± %stddev   metric
505.00 ± 7% +83.6% 927.00 ± 4% vmstat.memory.buff
6392 ± 35% +62.4% 10382 ± 0% numa-meminfo.node0.Mapped
2646 ±130% +226.2% 8631 ± 0% numa-meminfo.node0.Shmem
9065 ± 25% -44.8% 5008 ± 1% numa-meminfo.node1.Mapped
26.78 ± 1% -65.1% 9.34 ± 0% turbostat.%Busy
709.50 ± 1% -65.2% 246.75 ± 0% turbostat.Avg_MHz
40.46 ± 1% +39.7% 56.54 ± 1% turbostat.CPU%c1
1597 ± 35% +62.4% 2594 ± 0% numa-vmstat.node0.nr_mapped
661.25 ±130% +226.3% 2157 ± 0% numa-vmstat.node0.nr_shmem
106.00 ± 39% +220.0% 339.25 ± 61% numa-vmstat.node0.numa_other
2266 ± 25% -44.8% 1251 ± 1% numa-vmstat.node1.nr_mapped
4.795e+08 ± 4% +117.9% 1.045e+09 ± 2% cpuidle.C1-NHM.time
463937 ± 2% +73.2% 803714 ± 2% cpuidle.C1-NHM.usage
1.699e+08 ± 3% -8.6% 1.553e+08 ± 1% cpuidle.C1E-NHM.time
7.062e+08 ± 0% -84.0% 1.131e+08 ± 5% cpuidle.POLL.time
440162 ± 1% -79.7% 89501 ± 6% cpuidle.POLL.usage
0.00 ± -1% +Inf% 8824 ± 70% latency_stats.avg.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall.do_init_module
0.00 ± -1% +Inf% 12106 ± 71% latency_stats.avg.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
0.00 ± -1% +Inf% 15174 ± 70% latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall.do_init_module
0.00 ± -1% +Inf% 108570 ± 80% latency_stats.max.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
0.00 ± -1% +Inf% 92833 ± 71% latency_stats.sum.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013.do_one_initcall.do_init_module
0.00 ± -1% +Inf% 811913 ± 70% latency_stats.sum.blk_execute_rq.scsi_execute.scsi_execute_req_flags.ses_recv_diag.[ses].ses_get_page2_descriptor.[ses].ses_get_power_status.[ses].ses_enclosure_data_process.[ses].ses_match_to_enclosure.[ses].ses_intf_add.[ses].class_interface_register.scsi_register_interface.0xffffffffa0006013
-9221 ±-11% -19.9% -7385 ±-11% sched_debug.cfs_rq:/.spread0.avg
591.90 ± 62% +112.9% 1260 ± 30% sched_debug.cfs_rq:/.spread0.max
-14064 ± -6% -11.1% -12500 ± -6% sched_debug.cfs_rq:/.spread0.min
306.42 ± 40% -41.9% 178.00 ± 8% sched_debug.cpu.load.max
75.40 ± 31% -33.6% 50.09 ± 13% sched_debug.cpu.load.stddev
714.67 ± 1% -9.9% 644.00 ± 5% sched_debug.cpu.nr_uninterruptible.max
1149 ± 9% -15.9% 967.25 ± 3% slabinfo.avc_xperms_node.active_objs
1149 ± 9% -15.9% 967.25 ± 3% slabinfo.avc_xperms_node.num_objs
1020 ± 8% +28.2% 1308 ± 3% slabinfo.btrfs_trans_handle.active_objs
1020 ± 8% +28.2% 1308 ± 3% slabinfo.btrfs_trans_handle.num_objs
351.75 ± 11% +39.2% 489.50 ± 8% slabinfo.btrfs_transaction.active_objs
351.75 ± 11% +39.2% 489.50 ± 8% slabinfo.btrfs_transaction.num_objs
544.00 ± 10% +20.6% 656.00 ± 12% slabinfo.kmem_cache_node.active_objs
544.00 ± 10% +20.6% 656.00 ± 12% slabinfo.kmem_cache_node.num_objs
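For reference, the %change column is the usual relative change of the two sample means, (new - old) / old; a minimal sketch (pct_change is an illustrative helper, not part of lkp-tests) reproducing the headline turbostat deltas from the table above:

```python
def pct_change(old, new):
    """Relative change of the mean, as reported in the %change column."""
    return (new - old) / old * 100.0

# turbostat rows from the comparison table (3b99669b base vs e132b9b3)
busy = pct_change(26.78, 9.34)                 # turbostat.%Busy   -> ~ -65.1%
avg_mhz = pct_change(709.50, 246.75)           # turbostat.Avg_MHz -> ~ -65.2%
poll_time = pct_change(7.062e+08, 1.131e+08)   # cpuidle.POLL.time -> ~ -84.0%

print(f"{busy:.1f} {avg_mhz:.1f} {poll_time:.1f}")
```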


lkp-ws02: Westmere-EP
Memory: 16G




turbostat.Avg_MHz

800 ++--------------------------------------------------------------------+
**.****.****.* **.****.****.** *.****.****.****.****.****.** *. * |
700 ++ * * * ** *.***
| |
600 ++ |
| |
500 ++ |
| |
400 ++ |
| |
300 ++ OO O O O |
| O OO O O O OO OOOO OOO |
200 ++ |
OO OOOO OOO |
100 ++--------------------------------------------------------------------+


turbostat.%Busy

30 ++---------------------------------------------------------------------+
**.****.****.* **.***.****.*** .***.* *.****.****.***.****.* * * |
| * * ** * *.* **.**
25 ++ |
| |
| |
20 ++ |
| |
15 ++ |
| |
| |
10 ++ O OOOO OOO OOOO |
| OOOO OOO O |
OO OOOO O O |
5 ++-------O-------------------------------------------------------------+


turbostat.CPU%c1

65 ++---------------------------------------------------------------------+
| O |
60 OO OO O O |
| O O |
| OOO OO O O OOOO OOO O |
55 ++ O O O |
| O O |
50 ++ |
| |
45 ++ |
| |
**.****. *. ***.***.****.****.***.* **.****.****.***.****.****.****. |
40 ++ *** * * **
| |
35 ++---------------------------------------------------------------------+

[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Xiaolong Ye
---
LKP_SERVER: inn
LKP_CGI_PORT: 80
LKP_CIFS_PORT: 139
testcase: fsmark
default-monitors:
wait: activate-monitor
kmsg:
uptime:
iostat:
heartbeat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
interval: 10
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 60
cpufreq_governor:
default-watchdogs:
oom-killer:
watchdog:
commit: e132b9b3bc7f19e9b158e42b323881d5dee5ecf3
model: Westmere-EP
memory: 16G
nr_hdd_partitions: 10
hdd_partitions: "/dev/disk/by-id/scsi-35000c500*-part1"
swap_partitions:
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD1002FAEX-00Z3A0_WD-WCATR5408564-part3"
category: benchmark
iterations: 1x
nr_threads: 32t
disk: 1HDD
fs: btrfs
fs2:
fsmark:
filesize: 5K
test_size: 400M
sync_method: fsyncBeforeClose
nr_directories: 16d
nr_files_per_directory: 256fpd
queue: bisect
testbox: lkp-ws02
tbox_group: lkp-ws02
kconfig: x86_64-rhel
enqueue_time: 2016-03-27 16:21:07.892606177 +08:00
compiler: gcc-4.9
rootfs: debian-x86_64-2015-02-07.cgz
id: 052d3b30a115d16c93009b42c032314044e0f600
user: lkp
head_commit: 390fc3e59c3a8d9938e58a7c995caa36bf868958
base_commit: b562e44f507e863c6792946e4e1b1449fbbac85d
branch: linux-devel/devel-hourly-2016032007
result_root: "/result/fsmark/1x-32t-1HDD-btrfs-5K-400M-fsyncBeforeClose-16d-256fpd/lkp-ws02/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/e132b9b3bc7f19e9b158e42b323881d5dee5ecf3/0"
job_file: "/lkp/scheduled/lkp-ws02/bisect_fsmark-1x-32t-1HDD-btrfs-5K-400M-fsyncBeforeClose-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-rhel-e132b9b3bc7f19e9b158e42b323881d5dee5ecf3-20160327-11856-1ve4oen-0.yaml"
nr_cpu: "$(nproc)"
max_uptime: 967.0
initrd: "/osimage/debian/debian-x86_64-2015-02-07.cgz"
bootloader_append:
- root=/dev/ram0
- user=lkp
- job=/lkp/scheduled/lkp-ws02/bisect_fsmark-1x-32t-1HDD-btrfs-5K-400M-fsyncBeforeClose-16d-256fpd-debian-x86_64-2015-02-07.cgz-x86_64-rhel-e132b9b3bc7f19e9b158e42b323881d5dee5ecf3-20160327-11856-1ve4oen-0.yaml
- ARCH=x86_64
- kconfig=x86_64-rhel
- branch=linux-devel/devel-hourly-2016032007
- commit=e132b9b3bc7f19e9b158e42b323881d5dee5ecf3
- BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/e132b9b3bc7f19e9b158e42b323881d5dee5ecf3/vmlinuz-4.5.0-rc4-00003-ge132b9b
- max_uptime=967
- RESULT_ROOT=/result/fsmark/1x-32t-1HDD-btrfs-5K-400M-fsyncBeforeClose-16d-256fpd/lkp-ws02/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/e132b9b3bc7f19e9b158e42b323881d5dee5ecf3/0
- LKP_SERVER=inn
- |-
ipmi_watchdog.start_now=1

earlyprintk=ttyS0,115200 systemd.log_level=err
debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100
panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0
console=ttyS0,115200 console=tty0 vga=normal

rw
lkp_initrd: "/lkp/lkp/lkp-x86_64.cgz"
modules_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/e132b9b3bc7f19e9b158e42b323881d5dee5ecf3/modules.cgz"
bm_initrd: "/osimage/deps/debian-x86_64-2015-02-07.cgz/lkp.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/run-ipconfig.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/turbostat.cgz,/lkp/benchmarks/turbostat.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs.cgz,/osimage/deps/debian-x86_64-2015-02-07.cgz/fs2.cgz,/lkp/benchmarks/fsmark.cgz"
linux_headers_initrd: "/pkg/linux/x86_64-rhel/gcc-4.9/e132b9b3bc7f19e9b158e42b323881d5dee5ecf3/linux-headers.cgz"
repeat_to: 2
kernel: "/pkg/linux/x86_64-rhel/gcc-4.9/e132b9b3bc7f19e9b158e42b323881d5dee5ecf3/vmlinuz-4.5.0-rc4-00003-ge132b9b"
dequeue_time: 2016-03-27 16:26:57.703409607 +08:00
job_state: finished
loadavg: 26.72 11.99 4.56 1/396 4953
start_time: '1459067277'
end_time: '1459067418'
version: "/lkp/lkp/.src-20160325-205817"
2016-03-27 16:27:56 mkfs -t btrfs /dev/sdd1
2016-03-27 16:27:56 mount -t btrfs /dev/sdd1 /fs/sdd1
2016-03-27 16:27:57 ./fs_mark -d /fs/sdd1/1 -d /fs/sdd1/2 -d /fs/sdd1/3 -d /fs/sdd1/4 -d /fs/sdd1/5 -d /fs/sdd1/6 -d /fs/sdd1/7 -d /fs/sdd1/8 -d /fs/sdd1/9 -d /fs/sdd1/10 -d /fs/sdd1/11 -d /fs/sdd1/12 -d /fs/sdd1/13 -d /fs/sdd1/14 -d /fs/sdd1/15 -d /fs/sdd1/16 -d /fs/sdd1/17 -d /fs/sdd1/18 -d /fs/sdd1/19 -d /fs/sdd1/20 -d /fs/sdd1/21 -d /fs/sdd1/22 -d /fs/sdd1/23 -d /fs/sdd1/24 -d /fs/sdd1/25 -d /fs/sdd1/26 -d /fs/sdd1/27 -d /fs/sdd1/28 -d /fs/sdd1/29 -d /fs/sdd1/30 -d /fs/sdd1/31 -d /fs/sdd1/32 -D 16 -N 256 -n 2560 -L 1 -S 1 -s 5120
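The fs_mark flags above map onto the job parameters (-s 5120 = 5K filesize, -D 16 = 16d, -N 256 = 256fpd, -S 1 = fsyncBeforeClose, -L 1 = 1x iteration, one -d directory per thread for 32t). As a sanity check, the per-file sizes multiply out to the configured test_size of 400M; a minimal sketch of the arithmetic:

```python
# cross-check: fs_mark invocation vs job parameters
nr_threads = 32          # one -d directory per thread (32t)
files_per_thread = 2560  # -n 2560
file_size = 5120         # -s 5120 bytes (5K filesize)

total = nr_threads * files_per_thread * file_size
print(total // (1024 * 1024))  # -> 400 (MiB), matching test_size: 400M
```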