Yes. It does not need to be slow if ordered writes are added to the
kernel.
> By safe, I mean that I could hit reset in the middle of 50 parallel
> un-tars and reboot the system and the file system comes up clean (no fsck,
> but data loss)?
In this specific case (i.e., you want to preserve file system
integrity, but do not care if you lose the information), adding
metadata logging to the ext2 file system would be the easiest thing to
do.
The only requirement is that the metadata log record must reach the
disk before the actual metadata changes do. The Linux kernel does not
currently support this kind of ordered write, but the new driver
structure from Thomas should provide a good framework for doing it.
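Roughly, I imagine the ordered interface looking something like the
fragment below. The submit_write_after() call is made up for
illustration (the real framework may look quite different); the point
is that the driver, not the file system, enforces the log-before-
metadata order without anyone blocking:

	/* Hypothetical ordered-write interface -- these names are
	 * invented for illustration only. */
	struct buffer_head;

	extern void submit_write(struct buffer_head *bh);
	/* Queue bh, but guarantee 'first' is on disk before bh is
	 * written.  Neither call blocks the caller. */
	extern void submit_write_after(struct buffer_head *bh,
				       struct buffer_head *first);

	static void log_metadata_update(struct buffer_head *log_bh,
					struct buffer_head *meta_bh)
	{
		submit_write(log_bh);                /* log record */
		submit_write_after(meta_bh, log_bh); /* ordered behind it */
	}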
If you do not mind having a slow file system, adding this would not
require the new driver framework: you just need to force a synchronous
write of the metadata log block to the disk before marking the
actual modified metadata block dirty in ext2.
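For concreteness, the synchronous variant could look something like
this, using the existing buffer cache calls (ext2_log_block() is a
hypothetical helper that builds the log record for a modified buffer,
and the exact mark_buffer_dirty() signature varies between kernel
versions):

	#include <linux/fs.h>
	#include <linux/locks.h>

	extern struct buffer_head *ext2_log_block(struct buffer_head *meta_bh);

	static void ext2_log_and_dirty(struct buffer_head *meta_bh)
	{
		struct buffer_head *log_bh = ext2_log_block(meta_bh);

		/* Force the log record to disk and wait for it... */
		ll_rw_block(WRITE, 1, &log_bh);
		wait_on_buffer(log_bh);

		/* ...only then may the real metadata block be marked
		 * dirty and written back by the usual mechanisms. */
		mark_buffer_dirty(meta_bh, 1);
		brelse(log_bh);
	}

The wait_on_buffer() is what makes this slow: every metadata update
stalls for a full disk write.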
> Has anyone thought about this very much? If so, is there a mailing list or
> archive that I can browse?
I have been thinking about implementing this for some time now and
have read a bit about it. I was even considering implementing the
slow approach yesterday (i.e., not depending on Thomas' new driver
framework), but had to leave the office early.
Cheers,
Miguel.