Re: what is our answer to ZFS?
From: Pavel Machek
Date: Tue Nov 22 2005 - 14:07:09 EST
Hi!
> > > Sun is proposing it can predict what storage layout will be efficient for
> > > as yet unheard of quantities of data, with unknown access patterns, at
> > > least a couple decades from now. It's also proposing that data
> > > compression and checksumming are the filesystem's job. Hands up anybody
> > > who spots conflicting trends here already? Who thinks the 128 bit
> > > requirement came from marketing rather than the engineers?
> >
> > Actually, if you are storing information in single protons, I'd say
> > you _need_ checksumming :-).
>
> You need error correcting codes at the media level. A molecular storage
> system like this would probably look a lot more like flash or dram than it
> would magnetic media. (For one thing, I/O bandwidth and seek times become a
> serious bottleneck with high density single point of access systems.)
>
> > [I actually agree with Sun here; not trusting the disk is a good
> > idea. At least you know a kernel panic/oops/etc. can't be caused
> > by bit corruption on the disk.]
>
> But who said the filesystem was the right level to do this at?
The filesystem may not be the best level to do it at, but doing it
at all is still better than the current state of the art. Doing it
at the media level is not enough, because a media-level code cannot
catch corruption introduced on the IDE cable, by driver bugs, etc.
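Something as simple as a CRC over each block, computed before the
data is handed to the block layer and verified on read-back, already
catches all of that. A minimal userspace sketch, assuming zlib's
crc32() (the block size and helper names are just illustrative):

#include <stdint.h>
#include <zlib.h>

#define BLK_SIZE 4096

/* Checksum computed while the data is still in host memory,
 * i.e. above the cable, the controller and the driver. */
static uint32_t blk_csum(const unsigned char *data)
{
	return (uint32_t)crc32(0L, data, BLK_SIZE);
}

/* On read-back, a mismatch means the block was corrupted somewhere
 * below us: media, cable, controller or driver. */
static int blk_verify(const unsigned char *data, uint32_t stored)
{
	return blk_csum(data) == stored ? 0 : -1;
}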
The DM layer might be a better place to do checksums, but perhaps the
filesystem can do it more efficiently (it knows its own access
patterns), and it is definitely easier to set up for the end user.
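To illustrate the efficiency point: the filesystem can keep the
checksum right next to the block pointer, so it gets read anyway
whenever the block is looked up, while a DM target has to maintain a
separate checksum table and pay extra I/O for it. A purely
illustrative sketch (the struct and field names are made up, not
taken from any existing filesystem):

#include <stdint.h>

struct fs_blkptr {
	uint64_t blkno;	/* on-disk location of the data block */
	uint32_t csum;	/* checksum of the block contents */
	uint32_t flags;	/* e.g. a "compressed" bit */
};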
If you want compression anyway (and you do want it -- for performance
reasons, if you are working with big texts or geographical data),
doing checksums at the same level just makes sense: both transforms
already touch every byte of the block on the same write path.
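A sketch of such a shared write path, again assuming zlib for both
the CRC and the compression (error handling trimmed, function name is
mine): checksum the logical data first, then compress it for the
media, so the same checksum also validates decompression on read.

#include <stdlib.h>
#include <zlib.h>

static int pack_block(const unsigned char *data, unsigned long len,
		      unsigned char **out, unsigned long *out_len,
		      unsigned long *csum)
{
	uLongf clen = compressBound(len);
	unsigned char *cbuf = malloc(clen);

	if (!cbuf)
		return -1;
	/* CRC of the uncompressed data: this is what the reader
	 * compares against after decompressing. */
	*csum = crc32(0L, data, len);
	if (compress(cbuf, &clen, data, len) != Z_OK) {
		free(cbuf);
		return -1;
	}
	*out = cbuf;
	*out_len = clen;
	return 0;
}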
Pavel
--
Thanks, Sharp!