Re: why is the size of a directory always 1024b ?

David Luyer (luyer@ucs.uwa.edu.au)
Fri, 25 Jun 1999 16:45:27 +0800


Alexander Viro wrote:
> Trimming the slack on the fly is royal PITA. Doing it upon request
> is trivial - cp -rl foo foo.old; rm -rf foo; mv foo.old foo; (or doing the
> same by hands).
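
For reference, spelling that trick out (the directory name is just an
example, and it assumes the directory being rebuilt is not a mount point):

    cp -rl foo foo.new    # hard links, so no file data is copied, and the
                          # new directory inodes are written out compact
    rm -rf foo            # throw away the old, bloated directory inodes
    mv foo.new foo        # the compact copy takes over the old name

The win comes from recreating the directory inodes themselves; the files
are merely relinked.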

A real example where this isn't possible:

move over from a large Digital Unix system to Linux.

/usr/users is a mount point, and the home directories live directly under
it as /usr/users/username, because that all worked fine under Digital Unix.

discover you need a hashing scheme, i.e. /usr/users/x/username or
/usr/users/x/x/username, since Linux doesn't handle massive directories
so well (ext2fs having linear directories and all).

move all of the user directories to the appropriate places (with
compatibility symlinks, then fix all the programs that depend on the old
structure, then remove the compatibility symlinks).
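
Roughly, that shuffle looks like the following (first-letter buckets and
the exact commands are only an illustration, and it assumes no
single-letter usernames):

    cd /usr/users
    for u in *; do
        [ "$u" = lost+found ] && continue         # skip the fs bookkeeping dir
        [ -d "$u" ] && [ ! -L "$u" ] || continue  # only real home directories
        b=$(echo "$u" | cut -c1)                  # bucket = first letter
        mkdir -p "$b"
        mv "$u" "$b/$u"                           # relocate the home directory
        ln -s "$b/$u" "$u"                        # compatibility symlink
    done
    # ...then fix everything that hard-codes /usr/users/username and
    # finally remove the compatibility symlinks.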

oh look, /usr/users itself is still big and horribly slow for some
operations, and the only way to fix that is either to build a second RAID
array of the same size or to delete everything and restore from backups,
since you need to mkfs to get rid of the bloated mount-point directory.
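
The reason it stays that way is that ext2 never shrinks a directory file
once it has grown, so even after the shuffle something like

    ls -ld /usr/users     # the size field sits at its high-water mark even
                          # though only the bucket directories are left

still shows the old size, and anything that scans the directory still has
to walk all of those mostly-empty blocks.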

major downtime required to get decent performance.

Not good. There has to be a better way of doing this. Personally I'd
prefer a magic option to fsck telling it to compact any directory with
more than X% wasted space before the filesystem gets mounted.
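
Until something like that exists, about the best available is to hunt down
the directories that *can* be rebuilt with the copy-and-swap trick above,
e.g. (GNU find, threshold picked arbitrarily):

    # directory inodes that have grown past ~40KB are good candidates for
    # the cp -rl / rm -rf / mv treatment -- except the mount point itself
    find /usr/users -type d -size +40k -exec ls -ld {} \;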

David.
