[LKP] [SUNRPC] c4a7ca77494: +6.0% fsmark.time.involuntary_context_switches, no primary result change

From: Huang Ying
Date: Sun Feb 15 2015 - 02:57:16 EST


FYI, we noticed the below changes on

commit c4a7ca774949960064dac11b326908f28407e8c3 ("SUNRPC: Allow waiting on memory allocation")


testbox/testcase/testparams: nhm4/fsmark/performance-1x-32t-1HDD-f2fs-nfsv4-8K-400M-fsyncBeforeClose-16d-256fpd

127b21b89f9d8ba0 c4a7ca774949960064dac11b32
---------------- --------------------------
%stddev %change %stddev
\ | \
52524 ± 0% +6.0% 55672 ± 0% fsmark.time.involuntary_context_switches
436 ± 14% +54.9% 676 ± 20% sched_debug.cfs_rq[0]:/.tg_load_contrib
433 ± 15% +54.7% 670 ± 21% sched_debug.cfs_rq[0]:/.blocked_load_avg
8348 ± 7% +27.0% 10602 ± 9% sched_debug.cfs_rq[0]:/.min_vruntime
190081 ± 13% +32.7% 252269 ± 13% sched_debug.cpu#0.sched_goidle
205783 ± 12% +30.2% 267903 ± 13% sched_debug.cpu#0.ttwu_local
464065 ± 11% +26.6% 587524 ± 12% sched_debug.cpu#0.nr_switches
464278 ± 11% +26.6% 587734 ± 12% sched_debug.cpu#0.sched_count
15807 ± 11% +19.6% 18910 ± 12% sched_debug.cpu#4.nr_load_updates
300041 ± 8% +20.3% 360969 ± 10% sched_debug.cpu#0.ttwu_count
1863 ± 9% +18.1% 2201 ± 10% sched_debug.cfs_rq[4]:/.exec_clock
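
For reference, the fsmark.time.involuntary_context_switches counter compared above is the per-process ru_nivcsw field exposed by getrusage(2), as reported by time(1)-style accounting; voluntary switches (ru_nvcsw) come from blocking waits, involuntary ones from preemption. A minimal sketch (not part of the original report) reading both counters for the current process:

```python
import resource

# getrusage(2) exposes the same context-switch accounting that
# time(1) and hence fsmark.time.* report:
#   ru_nvcsw  - voluntary switches (process blocked waiting for a resource)
#   ru_nivcsw - involuntary switches (process was preempted)
usage = resource.getrusage(resource.RUSAGE_SELF)
print("voluntary context switches:", usage.ru_nvcsw)
print("involuntary context switches:", usage.ru_nivcsw)
```

A rise in ru_nivcsw with flat throughput, as in the tables above, suggests the workload is being preempted more often without the extra scheduling activity changing the primary result.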

testbox/testcase/testparams: nhm4/fsmark/performance-1x-32t-1HDD-btrfs-nfsv4-8K-400M-fsyncBeforeClose-16d-256fpd

127b21b89f9d8ba0 c4a7ca774949960064dac11b32
---------------- --------------------------
%stddev %change %stddev
\ | \
52184 ± 0% +5.6% 55122 ± 0% fsmark.time.involuntary_context_switches
557 ± 19% +21.5% 677 ± 9% sched_debug.cfs_rq[5]:/.blocked_load_avg
217 ± 19% -42.9% 124 ± 21% sched_debug.cfs_rq[2]:/.load
45852 ± 14% -39.4% 27773 ± 24% sched_debug.cpu#7.ttwu_local
457 ± 18% +50.1% 686 ± 20% sched_debug.cfs_rq[0]:/.tg_load_contrib
455 ± 18% +46.7% 668 ± 19% sched_debug.cfs_rq[0]:/.blocked_load_avg
66605 ± 10% -26.7% 48826 ± 14% sched_debug.cpu#7.sched_goidle
78249 ± 9% -22.5% 60678 ± 11% sched_debug.cpu#7.ttwu_count
153506 ± 9% -22.7% 118649 ± 12% sched_debug.cpu#7.nr_switches
153613 ± 9% -22.7% 118755 ± 12% sched_debug.cpu#7.sched_count
15806 ± 6% +19.2% 18833 ± 18% sched_debug.cpu#4.nr_load_updates
2171 ± 5% +15.6% 2510 ± 13% sched_debug.cfs_rq[4]:/.exec_clock
9924 ± 11% -27.0% 7244 ± 25% sched_debug.cfs_rq[3]:/.min_vruntime
3156 ± 4% -13.4% 2734 ± 8% sched_debug.cfs_rq[7]:/.min_vruntime

testbox/testcase/testparams: nhm4/fsmark/performance-1x-32t-1HDD-ext4-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd

127b21b89f9d8ba0 c4a7ca774949960064dac11b32
---------------- --------------------------
%stddev %change %stddev
\ | \
104802 ± 0% +7.7% 112883 ± 0% fsmark.time.involuntary_context_switches
471755 ± 0% -1.3% 465592 ± 0% fsmark.time.voluntary_context_switches
1977 ± 36% +90.8% 3771 ± 8% sched_debug.cpu#4.curr->pid
2 ± 34% +80.0% 4 ± 24% sched_debug.cpu#6.cpu_load[1]
4 ± 33% +83.3% 8 ± 31% sched_debug.cpu#6.cpu_load[0]
193 ± 17% +48.0% 286 ± 19% sched_debug.cfs_rq[2]:/.blocked_load_avg
196 ± 17% +47.5% 290 ± 19% sched_debug.cfs_rq[2]:/.tg_load_contrib
96 ± 18% +40.6% 135 ± 11% sched_debug.cfs_rq[7]:/.load
97 ± 18% +38.5% 135 ± 11% sched_debug.cpu#7.load
2274 ± 7% -16.5% 1898 ± 3% proc-vmstat.pgalloc_dma
319 ± 6% -29.7% 224 ± 24% sched_debug.cfs_rq[1]:/.tg_load_contrib
314 ± 5% -29.4% 222 ± 25% sched_debug.cfs_rq[1]:/.blocked_load_avg
621 ± 10% +41.9% 881 ± 37% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum

nhm4: Nehalem
Memory: 4G




fsmark.time.involuntary_context_switches

114000 ++-----------------------------------------------------------------+
113000 O+ O O O O O O O O O O O O O O O O O |
| O O O O O
112000 ++ |
111000 ++ |
| |
110000 ++ |
109000 ++ |
108000 ++ |
| |
107000 ++ |
106000 ++ |
| |
105000 *+.*..*..*..*..*..*..*..*..*..*...*..*..*..*..*..*..*..* |
104000 ++-----------------------------------------------------------------+


[*] bisect-good sample
[O] bisect-bad sample

To reproduce:

apt-get install ruby
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml


Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


Thanks,
Fengguang

---
testcase: fsmark
default-monitors:
wait: pre-test
uptime:
iostat:
vmstat:
numa-numastat:
numa-vmstat:
numa-meminfo:
proc-vmstat:
proc-stat:
meminfo:
slabinfo:
interrupts:
lock_stat:
latency_stats:
softirqs:
bdi_dev_mapping:
diskstats:
nfsstat:
cpuidle:
cpufreq-stats:
turbostat:
pmeter:
sched_debug:
interval: 10
default_watchdogs:
watch-oom:
watchdog:
cpufreq_governor: performance
commit: 5721f7f0f14b682d2e86e9a4aa9025acaf69399d
model: Nehalem
nr_cpu: 8
memory: 4G
hdd_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part1"
swap_partitions: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part2"
rootfs_partition: "/dev/disk/by-id/ata-WDC_WD1003FBYZ-010FB0_WD-WCAW36812041-part3"
netconsole_port: 6649
iterations: 1x
nr_threads: 32t
disk: 1HDD
fs: ext4
fs2: nfsv4
fsmark:
filesize: 9B
test_size: 400M
sync_method: fsyncBeforeClose
nr_directories: 16d
nr_files_per_directory: 256fpd
testbox: nhm4
tbox_group: nhm4
kconfig: x86_64-rhel
enqueue_time: 2015-02-10 23:57:36.647136389 +08:00
head_commit: 5721f7f0f14b682d2e86e9a4aa9025acaf69399d
base_commit: bfa76d49576599a4b9f9b7a71f23d73d6dcff735
branch: next/master
kernel: "/kernel/x86_64-rhel/5721f7f0f14b682d2e86e9a4aa9025acaf69399d/vmlinuz-3.19.0-next-20150211-g5721f7f"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/nhm4/fsmark/performance-1x-32t-1HDD-ext4-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd/debian-x86_64-2015-02-07.cgz/x86_64-rhel/5721f7f0f14b682d2e86e9a4aa9025acaf69399d/0"
job_file: "/lkp/scheduled/nhm4/cyclic_fsmark-performance-1x-32t-1HDD-ext4-nfsv4-9B-400M-fsyncBeforeClose-16d-256fpd-x86_64-rhel-HEAD-5721f7f0f14b682d2e86e9a4aa9025acaf69399d-0-20150210-12345-1obo8ps.yaml"
dequeue_time: 2015-02-11 16:52:17.667759731 +08:00
job_state: finished
loadavg: 29.23 38.04 33.63 1/155 27817
start_time: '1423644766'
end_time: '1423646499'
version: "/lkp/lkp/.src-20150211-114913"
mkfs -t ext4 -q -F /dev/sda1
mount -t ext4 /dev/sda1 /fs/sda1
/etc/init.d/rpcbind start
/etc/init.d/nfs-common start
/etc/init.d/nfs-kernel-server start
mount -t nfs -o vers=4 localhost:/fs/sda1 /nfs/sda1
./fs_mark -d /nfs/sda1/1 -d /nfs/sda1/2 -d /nfs/sda1/3 -d /nfs/sda1/4 -d /nfs/sda1/5 -d /nfs/sda1/6 -d /nfs/sda1/7 -d /nfs/sda1/8 -d /nfs/sda1/9 -d /nfs/sda1/10 -d /nfs/sda1/11 -d /nfs/sda1/12 -d /nfs/sda1/13 -d /nfs/sda1/14 -d /nfs/sda1/15 -d /nfs/sda1/16 -d /nfs/sda1/17 -d /nfs/sda1/18 -d /nfs/sda1/19 -d /nfs/sda1/20 -d /nfs/sda1/21 -d /nfs/sda1/22 -d /nfs/sda1/23 -d /nfs/sda1/24 -d /nfs/sda1/25 -d /nfs/sda1/26 -d /nfs/sda1/27 -d /nfs/sda1/28 -d /nfs/sda1/29 -d /nfs/sda1/30 -d /nfs/sda1/31 -d /nfs/sda1/32 -D 16 -N 256 -n 3200 -L 1 -S 1 -s 9
_______________________________________________
LKP mailing list
LKP@xxxxxxxxxxxxxxx