[BENCHMARK] fswide dirty bit for ext2
From: Valerie Henson
Date: Wed Mar 29 2006 - 01:55:36 EST
I am working on a file-system-wide dirty bit for ext2. This allows
you to skip a full fsck after a crash, as long as the file system
was not being actively modified when the crash occurred.
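To make the idea concrete, here's a toy userspace sketch of how the
bit behaves. This is an illustration only, not the actual patch; all
of the names in it (sketch_super, sync_super, and so on) are invented
for the example:

#include <stdio.h>

struct sketch_super {
        int s_fswide_dirty;     /* 1 = modifications in flight */
};

/* Stand-in for writing the superblock to disk. */
static void sync_super(struct sketch_super *sb)
{
        printf("superblock written, dirty=%d\n", sb->s_fswide_dirty);
}

/* Before the first change after a quiet period, the dirty bit must
 * reach disk, so that a crash mid-update is detectable. */
static void begin_modify(struct sketch_super *sb)
{
        if (!sb->s_fswide_dirty) {
                sb->s_fswide_dirty = 1;
                sync_super(sb);
        }
}

/* Once all dirty data has been written back, clear the bit; a crash
 * after this point leaves the disk consistent. */
static void quiesce(struct sketch_super *sb)
{
        sb->s_fswide_dirty = 0;
        sync_super(sb);
}

int main(void)
{
        struct sketch_super sb = { 0 };

        begin_modify(&sb);      /* bit set and flushed before the update */
        quiesce(&sb);           /* bit cleared after writeback completes */

        /* On the next mount after a crash: clean bit => skip full fsck. */
        printf("full fsck needed: %s\n", sb.s_fswide_dirty ? "yes" : "no");
        return 0;
}

The only important part is the ordering: the bit must be on disk
before the first modification, and is cleared only after everything
else has been written back.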
Zach Brown was kind enough to run a few benchmarks comparing various
versions of ext2 and ext3. Results:
             ext2    ext2r  *ext2fw*    ext3   ext3wb
kuntar      17.86    19.59    17.58    21.10    30.60
postmark     6.41     6.57     8.48    30.87    15.47
tiobench    34.11    34.96    34.26    34.00    34.06
ext2:     ext2, 4k blocks, noatime
ext2r:    ext2, 4k blocks, noatime, reservations
ext2fw:   ext2, 4k blocks, noatime, reservations, fswide
ext3:     ext3, 4k blocks, 256m journal, noatime
ext3wb:   ext3, 4k blocks, 256m journal, noatime, data=writeback

kuntar:   expanding a cached uncompressed kernel tarball and syncing
postmark: postmark, numbers = 10000, transactions = 10000
tiobench: tiobench, 16 threads, 256m size
The summary is that ext2 with the fswide bit performs the same as
plain ext2, except about 30% slower on postmark. The slower postmark
result is expected, given that the orphan inode list requires at
least two writes (to either the superblock or another inode) per
file removal. An obvious improvement would be per-block-group orphan
inode lists, which would require a non-trivial but not frightening
patch to fsck. (This might also be ported to ext3.) Other ideas?
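To make the per-group idea concrete, here's a toy model of the two
list layouts. Again an illustration only: the struct and field names
are invented, not the real on-disk format. Each removal still costs
two writes; what changes is that the second write lands in the owning
group's descriptor instead of always hitting the one shared
superblock, so removals spread across groups stop contending for a
single block:

#include <stdio.h>

#define NGROUPS 4

struct inode {
        unsigned int i_ino;
        unsigned int i_next_orphan;     /* next orphan ino, 0 = end */
};

/* (a) One fs-wide list headed in the superblock. */
struct super {
        unsigned int s_orphan_head;
};

/* (b) Per-block-group lists headed in the group descriptors. */
struct group_desc {
        unsigned int g_orphan_head;
};

static void orphan_add_fswide(struct super *sb, struct inode *in)
{
        in->i_next_orphan = sb->s_orphan_head;  /* write #1: the inode */
        sb->s_orphan_head = in->i_ino;          /* write #2: superblock */
}

static void orphan_add_pergroup(struct group_desc gd[], struct inode *in)
{
        unsigned int g = in->i_ino % NGROUPS;   /* group owning this inode */

        in->i_next_orphan = gd[g].g_orphan_head; /* write #1: the inode */
        gd[g].g_orphan_head = in->i_ino;         /* write #2: that group's
                                                    descriptor, not the
                                                    shared superblock */
}

int main(void)
{
        struct super sb = { 0 };
        struct group_desc gd[NGROUPS] = { { 0 } };
        struct inode a = { 17, 0 }, b = { 42, 0 };

        /* Every removal dirties the same superblock block... */
        orphan_add_fswide(&sb, &a);
        orphan_add_fswide(&sb, &b);

        /* ...whereas per-group heads spread those writes around. */
        orphan_add_pergroup(gd, &a);
        orphan_add_pergroup(gd, &b);

        printf("fswide head: %u; group heads: %u %u %u %u\n",
               sb.s_orphan_head,
               gd[0].g_orphan_head, gd[1].g_orphan_head,
               gd[2].g_orphan_head, gd[3].g_orphan_head);
        return 0;
}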
I split out the ext2 reservations port into its own patch.
ext2+reservations alone is strangely slower than ext2+fswide on one
benchmark (kuntar: 19.59 vs. 17.58); I did some preliminary
debugging but haven't found anything obviously wrong yet. The
patches are available for anyone who wants to track this down
themselves before I get around to it.
Patches against 2.6.16-rc5-mm3 here:
Fswide bit (includes reservations):
http://infohost.nmt.edu/~val/patches/fswide_shorter_patch
Reservations only:
http://infohost.nmt.edu/~val/patches/resv_only_patch
-VAL