Ric Wheeler wrote:
> Unfortunately not - if you mount a file system with the write cache enabled and see "barriers disabled" messages in /var/log/messages, this is exactly what happens.
> The easiest way to lose your data in Linux - with RAID, without RAID, S-ATA or SAS - is to run with the write cache enabled.
> Compare the sizes: even a large RAID stripe is measured in KB, and as this thread has already mentioned, the damage is confined to just one stripe (or even just a disk sector or two).
> If you lose power with the write caches enabled on that same 5-drive RAID set, you could lose as much as 5 * 32MB = 160MB of freshly written data (16-32MB write caches are common on S-ATA disks these days).
This is fundamentally wrong. Many filesystems today use either barriers or cache flushes (when barriers are not supported), and the days when disk drives lied to the OS about having flushed their caches are long gone.
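
For example, barriers can be requested explicitly at mount time. A minimal sketch in C, assuming an ext3 filesystem on /dev/md0 and a /mnt/data mountpoint (both illustrative); if the underlying device cannot honour barriers, the kernel logs the "barriers disabled" message quoted above and carries on without them:

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* "barrier=1" asks ext3 to issue write barriers; the device and
	 * mountpoint are placeholders for this sketch. */
	if (mount("/dev/md0", "/mnt/data", "ext3", 0, "barrier=1") != 0) {
		perror("mount");
		return 1;
	}
	return 0;
}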
> For MD RAID5 (and RAID6), you really must run with the write cache disabled until we get barriers to work for those configurations.
I highly doubt barriers will ever be supported on anything but simple
raid1, because it's impossible to guarantee ordering across multiple
drives. Well, it *is* possible to have write barriers with journalled
(and/or with battery-backed-cache) raid[456].
Note that even if raid[456] does not support barriers, write cache
flushes still work.
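
A minimal sketch of that flush path in C (the filename is illustrative): fsync() forces freshly written data out of the drive's volatile write cache, on filesystems and kernels that translate it into a cache flush command:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char buf[] = "critical data\n";
	int fd = open("/mnt/data/journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, buf, sizeof buf - 1) != (ssize_t)(sizeof buf - 1)) {
		perror("write");
		return 1;
	}
	/* fsync() does not return until the data (and, on a flush-capable
	 * stack, the drive's write cache) has reached stable storage. */
	if (fsync(fd) != 0) {
		perror("fsync");
		return 1;
	}
	close(fd);
	return 0;
}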
/mjt