Re: [LKP] [lkp] [xfs] 68a9f5e700: aim7.jobs-per-min -13.6% regression
From: Fengguang Wu
Date: Sun Aug 14 2016 - 19:57:58 EST
Hi Christoph,
On Sun, Aug 14, 2016 at 06:17:24PM +0200, Christoph Hellwig wrote:
> Snipping the long context:
>
> I think there are three observations here:
>
>  (1) removing the mark_page_accessed (which is the only significant
>      change in the parent commit) hurts the
>      aim7/1BRD_48G-xfs-disk_rr-3000-performance/ivb44 test.
>      I'd still rather stick to the filemap version and let the
>      VM people sort it out.  How do the numbers for this test
>      look for XFS vs say ext4 and btrfs?
We'll be able to compare between filesystems when the tests for Linus'
patch finish.
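(As an aside for anyone following along: the call in question is easiest to picture in a stripped-down buffered-write helper. The sketch below is illustrative only, not the actual XFS/iomap code -- sketch_write_page() is a made-up name -- but it shows where mark_page_accessed() sits and why removing it matters: pages that are only written and never marked accessed stay on the inactive LRU and get reclaimed earlier, so a workload that revisits its pages can lose its working set.)

/* Illustrative sketch only -- not the actual XFS/iomap write path. */
#include <linux/fs.h>
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/string.h>
#include <linux/swap.h>		/* mark_page_accessed() */

static int sketch_write_page(struct address_space *mapping, pgoff_t index,
			     const void *buf, size_t off, size_t len)
{
	struct page *page;
	void *kaddr;

	page = grab_cache_page_write_begin(mapping, index, 0);
	if (!page)
		return -ENOMEM;

	kaddr = kmap_atomic(page);
	memcpy(kaddr + off, buf, len);
	kunmap_atomic(kaddr);
	flush_dcache_page(page);

	/*
	 * The call under discussion: a repeat touch promotes the page
	 * from the inactive to the active LRU list, so rewritten pages
	 * survive reclaim.  Drop it and the page keeps its initial
	 * LRU position.
	 */
	mark_page_accessed(page);

	set_page_dirty(page);
	unlock_page(page);
	put_page(page);
	return 0;
}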
>  (2) lots of additional spinlock contention in the new case.  A quick
>      check shows that I fat-fingered my rewrite so that we do
>      the xfs_inode_set_eofblocks_tag call now for the pure lookup
>      case, and pretty much all new cycles come from that.
>  (3) Boy, are those xfs_inode_set_eofblocks_tag calls expensive, and
>      we're already doing way too many even without my little bug above.
> So I've force pushed a new version of the iomap-fixes branch with
> (2) fixed, and also a little patch to make xfs_inode_set_eofblocks_tag
> a lot less expensive slotted in before that.  Would be good to see
> the numbers with that.
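To make (3) concrete: xfs_inode_set_eofblocks_tag() takes the per-AG pag_ici_lock and tags the inode in the radix tree on every call, whether or not the inode is already tagged. The natural way to make it cheaper is to cache the tagged state in the in-core inode, so repeat calls return before touching any shared lock. A rough sketch of that idea follows (XFS_IEOFBLOCKS is an assumed flag name, and the slow path is simplified -- the real function also propagates the tag up into the per-mount perag tree and has a trace point):

static void sketch_set_eofblocks_tag(struct xfs_inode *ip)
{
	struct xfs_mount	*mp = ip->i_mount;
	struct xfs_perag	*pag;

	/*
	 * Fast path: the (assumed) XFS_IEOFBLOCKS flag says the inode
	 * is already tagged, so return without touching the per-AG
	 * lock.  An unlocked read is fine here as a hint.
	 */
	if (ip->i_flags & XFS_IEOFBLOCKS)
		return;

	spin_lock(&ip->i_flags_lock);
	ip->i_flags |= XFS_IEOFBLOCKS;
	spin_unlock(&ip->i_flags_lock);

	/* Slow path runs only on the first tagging of the inode. */
	pag = xfs_perag_get(mp, XFS_INO_TO_AGNO(mp, ip->i_ino));
	spin_lock(&pag->pag_ici_lock);
	radix_tree_tag_set(&pag->pag_ici_root,
			   XFS_INO_TO_AGINO(mp, ip->i_ino),
			   XFS_ICI_EOFBLOCKS_TAG);
	spin_unlock(&pag->pag_ici_lock);
	xfs_perag_put(pag);
}

The clear side would have to drop the flag under i_flags_lock, and fixing (2) of course means the pure-lookup path never gets here at all.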
I just queued these jobs. The commented-out ones will be submitted as
the 2nd stage when the 1st-round quick tests finish.
queue=(
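# scheduling and test parameters (the flag readings in these
# comments are approximate annotations, not authoritative docs)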
queue
-q vip
--repeat-to 3
fs=xfs
perf-profile.delay=1
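# branch under test (-b) and kernel commits to build against (-k)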
-b hch-vfs/iomap-fixes
-k bf4dc6e4ecc2a3d042029319bc8cd4204c185610
-k 74a242ad94d13436a1644c0b4586700e39871491
-k 99091700659f4df965e138b38b4fa26a29b7eade
)
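# 1st-round quick tests: one invocation per testbox (-t) and job file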
"${queue[@]}" -t ivb44 aim7-fs-1brd.yaml
"${queue[@]}" -t ivb44 fsmark-generic-1brd.yaml
"${queue[@]}" -t ivb43 fsmark-stress-journal-1brd.yaml
"${queue[@]}" -t lkp-hsx02 fsmark-generic-brd-raid.yaml
"${queue[@]}" -t lkp-hsw-ep4 fsmark-1ssd-nvme-small.yaml
#"${queue[@]}" -t ivb43 fsmark-stress-journal-1hdd.yaml
#"${queue[@]}" -t ivb44 dd-write-1hdd.yaml fsmark-generic-1hdd.yaml
Thanks,
Fengguang