-----Original Message-----
From: mikefedyk@xxxxxxxxx [mailto:mikefedyk@xxxxxxxxx] On
Behalf Of Mike Fedyk
Sent: Wednesday, June 23, 2010 9:51 PM
To: Daniel Taylor
Cc: Daniel J Blueman; Mat; LKML;
linux-fsdevel@xxxxxxxxxxxxxxx; Chris Mason; Ric Wheeler;
Andrew Morton; Linus Torvalds; The development of BTRFS
Subject: Re: Btrfs: broken file system design (was Unbound(?)
internal fragmentation in Btrfs)
On Wed, Jun 23, 2010 at 8:43 PM, Daniel Taylor
<Daniel.Taylor@xxxxxxx> wrote:
Just an FYI reminder. The original test (2K files) is utterly
pathological for disk drives with 4K physical sectors, such as
those now shipping from WD, Seagate, and others. Some of the
SSDs have larger (16K) or smaller (2K) blocks. There is also
the issue of btrfs over RAID (which I know is not entirely
sensible, but which will happen).
The absolute minimum allocation size for data should be the same
as, and aligned with, the underlying disk block size. If that
results in underutilization, I think that's a good thing for
performance, compared to read-modify-write cycles to update
partial disk blocks.
Btrfs packs smaller objects into the blocks in certain cases.
As long as no object smaller than the disk block size is ever
flushed to media, and all flushed objects are aligned to the disk
blocks, there should be no real performance hit from that.
Otherwise we end up with the damage seen in the ext[234] family,
where the file blocks can be aligned, but the 1K inode updates
cause read-modify-write (RMW) cycles and cost a >10% performance
hit for creation/update of large numbers of files.
An RMW cycle costs at least a full rotation (11 msec on a 5400 RPM
drive), which is painful.