[PATCH v2.5 0/3] mm/fs: Remove unnecessary waiting for stable pages
From: Darrick J. Wong
Date: Fri Jan 18 2013 - 20:14:35 EST
Hi all,
This patchset ("stable page writes, part 2") makes some key modifications to
the original 'stable page writes' patchset. First, it provides creators
(devices and filesystems) of a backing_dev_info a flag that declares whether or
not it is necessary to ensure that page contents cannot change during writeout.
It is no longer assumed that this is true of all devices (which was never true
anyway). Second, the flag is used to relaxed the wait_on_page_writeback calls
so that wait only occurs if the device needs it. Third, it fixes up the
remaining disk-backed filesystems to use this improved conditional-wait logic
to provide stable page writes on those filesystems.
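To make the conditional wait concrete, the logic boils down to something like
the sketch below; a filesystem calls this in place of a bare
wait_on_page_writeback(). Treat the helper and capability names as
illustrations of the interface this series proposes, not a verbatim excerpt
from the patches:

    /* Sketch only: helper and capability names are illustrative. */
    #include <linux/backing-dev.h>
    #include <linux/pagemap.h>

    void wait_for_stable_page(struct page *page)
    {
            struct address_space *mapping = page_mapping(page);
            struct backing_dev_info *bdi = mapping->backing_dev_info;

            /* Devices that never read in-flight page contents (no
             * checksums, no DIF/DIX, no pre-write parity reads) leave
             * the capability bit clear, so their writers skip the
             * wait entirely. */
            if (!bdi_cap_stable_pages_required(bdi))
                    return;

            wait_on_page_writeback(page);
    }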
It is hoped that (for people not using checksumming devices, anyway) this
patchset will give back the performance that has been unnecessarily lost since
the original stable page write patchset went into 3.0. Sorry about not fixing
it sooner.
Several people complained about the long write latencies introduced by the
original stable page write patchset. Generally speaking, the kernel ought to
allocate as little extra memory as possible to facilitate writeout, but for
people who simply cannot wait, a second page stability strategy is
(re)introduced: snapshotting page contents. The waiting behavior is still the
default strategy; to enable page snapshotting, a superblock flag
(MS_SNAP_STABLE) must be set. This flag is used to bandaid^Henable stable page
writeback on ext3[1], and is not used anywhere else.
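For illustration, the choice between the two strategies at the moment a
writer wants to redirty a page under writeback looks roughly like this;
ensure_page_stability() is a made-up name for this sketch, and only
MS_SNAP_STABLE comes from this series:

    /* Hypothetical helper, for illustration only. */
    static void ensure_page_stability(struct page *page,
                                      struct super_block *sb)
    {
            /* With MS_SNAP_STABLE, writeout snapshots (bounces) the
             * page contents at submission time instead, so the
             * writer may proceed immediately. */
            if (sb->s_flags & MS_SNAP_STABLE)
                    return;

            /* Default strategy: block until writeback finishes so
             * the in-flight page contents stay stable. */
            wait_on_page_writeback(page);
    }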
Given that there are already a few storage devices and network FSes that have
rolled their own page stability wait/page snapshot code, it would be nice to
move towards consolidating all of these. It seems possible that iscsi and
raid5 may wish to use the new stable page write support to enable zero-copy
writeout.
Thank you to Jan Kara for helping fix a couple more filesystems.
Per Andrew Morton's request, here are the results of using dbench to measure
latencies on ext2:
3.8.0-rc3:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX        109347     0.028    59.817
ReadX         347180     0.004     3.391
Flush          15514    29.828   287.283
Throughput 57.429 MB/sec  4 clients  4 procs  max_latency=287.290 ms

3.8.0-rc3 + patches:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX        105556     0.029     4.273
ReadX         335004     0.005     4.112
Flush          14982    30.540   298.634
Throughput 55.4496 MB/sec  4 clients  4 procs  max_latency=298.650 ms
As you can see, for ext2 the maximum write latency decreases from ~60ms to
~4ms on a laptop hard disk. I'm not sure why the flush latencies increase,
though I suspect that being able to dirty pages faster gives the flusher more
work to do.
On ext4, the average write latency decreases, as do all of the maximum
latencies:
3.8.0-rc3:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX         85624     0.152    33.078
ReadX         272090     0.010    61.210
Flush          12129    36.219   168.260
Throughput 44.8618 MB/sec  4 clients  4 procs  max_latency=168.276 ms

3.8.0-rc3 + patches:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX         86082     0.141    30.928
ReadX         273358     0.010    36.124
Flush          12214    34.800   165.689
Throughput 44.9941 MB/sec  4 clients  4 procs  max_latency=165.722 ms
XFS exhibits latency improvements similar to ext2's:
3.8.0-rc3:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX        125739     0.028   104.343
ReadX         399070     0.005     4.115
Flush          17851    25.004   131.390
Throughput 66.0024 MB/sec  4 clients  4 procs  max_latency=131.406 ms

3.8.0-rc3 + patches:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX        123529     0.028     6.299
ReadX         392434     0.005     4.287
Flush          17549    25.120   188.687
Throughput 64.9113 MB/sec  4 clients  4 procs  max_latency=188.704 ms
...and btrfs, just to round things out, also shows some latency decreases:
3.8.0-rc3:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX         67122     0.083    82.355
ReadX         212719     0.005     2.828
Flush           9547    47.561   147.418
Throughput 35.3391 MB/sec  4 clients  4 procs  max_latency=147.433 ms

3.8.0-rc3 + patches:
Operation      Count    AvgLat    MaxLat
----------------------------------------
WriteX         64898     0.101    71.631
ReadX         206673     0.005     7.123
Flush           9190    47.963   219.034
Throughput 34.0795 MB/sec  4 clients  4 procs  max_latency=219.044 ms
Before this patchset, all filesystems would block writers on pages under
writeback, regardless of whether or not it was necessary. ext3 would wait, but
still generate occasional checksum errors. The network filesystems were left
to do their own thing, so they'd wait too.
After this patchset, all the disk filesystems except ext3 and btrfs will wait
only if the hardware requires it. ext3 (if necessary) snapshots pages instead
of blocking, and btrfs provides its own bdi so the mm will never wait. Network
filesystems haven't been touched, so either they provide their own wait code,
or they don't block at all. The blocking behavior is back to what it was
before 3.0 if you don't have a disk requiring stable page writes.
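"Requires it" here means that the device has flagged its backing_dev_info.
The producer side is a one-liner in the driver; the capability name below
follows the convention this series describes and is meant as a sketch, not a
verbatim excerpt:

    /* A block driver whose integrity engine reads in-flight pages
     * (checksums, DIF/DIX, parity) advertises that it needs stable
     * pages.  Capability name is illustrative. */
    static void mydrv_require_stable_pages(struct request_queue *q)
    {
            q->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES;
    }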
This patchset has been tested on 3.8.0-rc3 on x64 with ext3, ext4, and xfs.
I've spot-checked 3.8.0-rc4 and seem to be getting the same results as -rc3.
What does everyone think about queueing this for 3.9?
--D
[1] The alternative fixes to ext3 include fixing the locking order and page
bit handling like we did for ext4 (but then why not just use ext4?), or
setting PG_writeback much earlier in the writeout path, which makes ext3
extremely slow. I tried the latter, but the number of write()s I could
initiate dropped by nearly an order of magnitude. That was a bit much even for
the author of the stable page series! :)