On Tuesday 25 August 2009 18:40:50 Ric Wheeler wrote:
> Repeat experiment until you get up to something like google scale or the
> other papers on failures in national labs in the US and then we can have an
> informed discussion.

At Google scale, anvil lightning can fry your machine out of a clear sky.
However, there are still a few non-enterprise users out there, and knowing
that specific usage patterns don't behave the way they expect might be useful
to them.
> I can promise you that hot unplugging and replugging a S-ATA drive will
> also lose you data if you are actively writing to it (ext2, 3,
> whatever).

I can promise you that running a S-ATA drive will also lose you data,
even if you are not actively writing to it. Just wait 10 years; so
what is your point?

> I lost a s-ata drive 24 hours after installing it in a new box. If I had
> MD RAID5, I would not have lost any.

> My point is that you fail to take into account the rate of failures of a
> given configuration and the probability of data loss given those rates.

Actually, that's _exactly_ what he's talking about.
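To put rough numbers on the rate argument, here's a back-of-envelope sketch.
The failure rates and the rebuild window are assumptions I picked for
illustration, not measurements:

```python
# Back-of-envelope sketch: probability of data loss per year under
# ASSUMED (illustrative, not measured) failure rates.

AFR = 0.03              # assumed annual failure rate of one drive
HOURS_PER_YEAR = 8766
REBUILD_HOURS = 10      # assumed window a RAID5 array runs degraded

# Single disk: any drive failure loses data.
p_single = AFR

# 4-disk RAID5: data is lost only if a second drive dies while the
# array is degraded (ignoring the write hole, unrecoverable read
# errors, operator error, and so on).
n = 4
p_first = 1 - (1 - AFR) ** n                  # some drive fails this year
p_second = 1 - (1 - AFR * REBUILD_HOURS / HOURS_PER_YEAR) ** (n - 1)
p_raid5 = p_first * p_second

print(f"single disk:  {p_single:.4f}")
print(f"4-disk RAID5: {p_raid5:.6f}")
```

Yes, the RAID5 number comes out orders of magnitude smaller; nobody is
disputing that it helps against whole-drive death. The cases being
documented are the ones the model above deliberately ignores.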
When writing to a degraded raid or a flash disk, journaling is essentially
useless. If you get a power failure, kernel panic, somebody tripping over a
USB cable, and so on, your filesystem will not be protected by journaling.
Your data won't be trashed _every_ time, but the likelihood is much greater
than experience with journaling in other contexts would suggest.
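The degraded-raid case isn't subtle, either. Here is a toy sketch of the
classic RAID5 write hole with XOR parity; the block values are made up and
nothing here is filesystem-specific:

```python
# Toy model of the RAID5 "write hole": XOR parity over three data
# blocks. A power cut between writing a data block and its parity
# leaves the stripe inconsistent; if a drive then dies, the block
# reconstructed from parity is garbage -- and it can be a block the
# journal never touched.

def parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

data = [0xAA, 0xBB, 0xCC]     # stripe: three data blocks
par = parity(data)            # parity block on a fourth drive

# Journaled update of block 0: the new value hits the platter...
data[0] = 0x11
# ...but power fails before the matching parity write. `par` is stale.

# Now the drive holding block 2 dies. Reconstruct it from the parity
# and the surviving data blocks:
reconstructed = par ^ data[0] ^ data[1]

print(hex(reconstructed))     # 0x77, not 0xcc: block 2 came back wrong
assert reconstructed != 0xCC
```

Note which block got mangled: block 2, which no transaction ever wrote.
Journal replay has no reason to even look at it.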
Worse, the journaling may be counterproductive by _hiding_ many errors that
fsck would promptly detect, so when the error is detected it may not be
associated with the event that caused it. It also may not be noticed until
good backups of the data have been overwritten or otherwise cycled out.
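A toy model of that hiding effect. To be clear, ext3 does not actually
checksum data blocks; the CRCs below just stand in for whatever consistency
a full fsck pass can verify that journal replay never looks at:

```python
# Toy illustration of journal replay vs. a full fsck. The "disk" is a
# list of blocks, each carrying a checksum; one block gets silently
# corrupted outside any journaled transaction.

import zlib

def make_block(payload: bytes):
    return {"data": payload, "crc": zlib.crc32(payload)}

disk = [make_block(bytes([i]) * 16) for i in range(16)]
journal = [3, 4]              # blocks covered by the last transaction

# Silent corruption of a block the journal knows nothing about:
disk[7]["data"] = b"\x00" + disk[7]["data"][1:]

def journal_replay(disk, journal):
    """Only verifies (here: re-checksums) the journaled blocks."""
    return all(zlib.crc32(disk[i]["data"]) == disk[i]["crc"]
               for i in journal)

def full_fsck(disk):
    """Scans every block, like a forced e2fsck -f would."""
    return [i for i, b in enumerate(disk)
            if zlib.crc32(b["data"]) != b["crc"]]

print(journal_replay(disk, journal))   # True: mounts "clean"
print(full_fsck(disk))                 # [7]: the damage a full check finds
```

The "clean" result is exactly the false confidence I'm complaining about:
nothing prompts a full check, so the bad block sits there until you read it,
possibly long after the cause is forgotten and the backups have rotated.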
You seem to be arguing that Linux is no longer used anywhere but the
enterprise, so issues affecting USB flash keys or cheap software-only RAID
aren't worth documenting?
Rob