Re: Filesystem optimization..

tytso@mit.edu
Mon, 5 Jan 1998 15:26:43 -0500


From: "Michael O'Reilly" <michael@metal.iinet.net.au>
Date: 01 Jan 1998 09:34:23 +0800

> Check out the Usenix Proceedings from the January, 1997 conference;
> there's an article about doing something very much like your idea.

Any idea when USENIX will allow you to buy their proceedings on-line? :)

If you're a USENIX member, you can download recent papers from the
proceedings on their web site.

> While I was there, I toyed with trying to add a similar scheme
> (although not quite as ambitious as the one described in their paper)
> to ext2fs. That is, instead of eliminating the inode table altogether,
> put a copy of the inode in the directory, but leave a forwarding
> pointer in the inode table for programs like e2fsck and dump which
> need to scan the inode table.

That did cross my mind. Without the redundancy of the inode-used
bitmap, and the inodes in known positions, it's a little harder to
detect inconsistencies...

It's actually darned near impossible. One of the reasons why the ext2
filesystem is so robust against errors, and why e2fsck can do a good job
of recovering files, is that we always know where to find all of the
inodes. If you store the inode information in the directories, and the
root directory gets smashed, you can say goodbye to basically all of
your data.

A human being might be able to deduce that a certain block was a
directory, and then work forwards and backwards to recover some or
all of the filesystem, but it would require an expert, and it's not the
sort of work that could be automated.

What was that again about basing the filesystem on ext2 so it would have
all of the robustness of the original filesystem? It won't work that
way at all, I'm afraid.

- Ted