...Repeat experiment until you get up to something like google scale or
the other papers on failures in national labs in the US and then we can
have an informed discussion.

On google scale, anvil lightning can fry your machine out of a clear sky.
However, there are still a few non-enterprise users out there, and
knowing that specific usage patterns don't behave the way they expect
might be useful to them.
You are missing the broader point of both papers. They (and people like
me, back when I was at EMC) look at large numbers of machines and try to
fix what actually breaks in the real world and causes data loss. The
motherboards, S-ATA controllers, and disk types are the same class of
parts that I have in my desktop box today.
These errors happen extremely commonly and are what RAID deals with well.
What does not happen commonly is that during the RAID rebuild (kicked
off only after a drive is kicked out), you push the power button or have
a second failure (power outage).
We will have more users lose data if they decide to use ext2 instead of
ext3 and use only single-disk storage.
So your argument basically is
'our ABS brakes are broken, but let's not tell anyone; our car is still
safer than a horse'
and
'while we know our ABS brakes are broken, they are not a major factor in
accidents, so let's not tell anyone'.
Sorry, but I'd expect slightly higher moral standards. If we can
document it in a way that's non-scary and does not push people to
single disks (horses), please go ahead; but you have to mention that
md raid breaks journalling assumptions (our ABS brakes really are
broken).
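
To make the broken assumption concrete, here is a deliberately
simplified, hypothetical sketch of the RAID-5 write hole (toy code,
not the md driver): a power failure between the data write and the
parity write lets a later rebuild hand back garbage for a block the
filesystem never touched, which is exactly what journalling assumes
cannot happen.

/* Toy illustration of the RAID-5 write hole; not the md code. */
#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned char d0 = 0xAA, d1 = 0x55;  /* two data blocks in one stripe */
	unsigned char parity = d0 ^ d1;      /* parity is consistent          */

	d0 = 0x11;                           /* journalled write hits d0...   */
	/* ...power fails here, before parity is rewritten (the write hole).  */

	/* Later the disk holding d1 dies; d1 is rebuilt from d0 and parity.  */
	unsigned char d1_rebuilt = d0 ^ parity;

	printf("d1 was 0x%02x, rebuilt as 0x%02x\n", d1, d1_rebuilt);
	assert(d1_rebuilt != d1);            /* d1 is silently corrupted      */
	return 0;
}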
Pavel