Re: [PATCH 0/4] (RESEND) ext3 barrier changes
From: Jamie Lokier
Date: Fri May 16 2008 - 17:45:37 EST
Andrew Morton wrote:
> > I suppose alternately I could send another patch to remove "remember
> > that ext3/4 by default offers higher data integrity guarantees than
> > most." from Documentation/filesystems/ext4.txt ;)
> We could add a big scary printk at mount time and provide a document?
Can I suggest making /proc/mounts say "barrier=0" when barriers are not
enabled, instead of omitting the option?
Boot logs are too large to pay close attention to unless it's really
obvious. (2.4 kernels _do_ have a similar message about "data
integrity not guaranteed" with USB drivers - I never understood what
it was getting at, or why it was removed for 2.6).
However, if I saw barrier=0 in /proc/mounts it would at least prompt
me to look it up and then make an informed decision.
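The kind of check that option would enable can be sketched like this - a
minimal illustration (the `barrier_option` helper and the sample mount
lines are hypothetical, and assume the /proc/mounts field layout of
device, mountpoint, fstype, options):

```python
def barrier_option(mounts_text, mountpoint):
    """Return the barrier option for a mountpoint, or None if omitted.

    Parses text in /proc/mounts format:
        device mountpoint fstype options dump pass
    """
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint:
            # Mount options are a comma-separated list in field 4.
            for opt in fields[3].split(","):
                if opt == "barrier" or opt.startswith("barrier="):
                    return opt
            return None  # mounted, but no barrier option shown
    return None

# Hypothetical sample: one ext3 mount omitting the option, one showing it.
sample = ("/dev/sda1 / ext3 rw,errors=remount-ro 0 0\n"
          "/dev/sdb1 /data ext3 rw,barrier=1 0 0\n")
print(barrier_option(sample, "/"))      # option omitted - is it on or off?
print(barrier_option(sample, "/data"))  # explicit, so the admin can tell
```

As the first lookup shows, an omitted option is ambiguous; always printing
barrier=0 or barrier=1 would remove the guesswork.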
Personally I had assumed barriers were enabled by default with ext3,
as some distros do that, the 2.4 patches did that, and:
I *have* experienced corruption following power loss without
barriers, and none with barriers.
When I mentioned to a programmer working on the same project that
turning off the write cache or using barriers is a solution, she said
"oh, yes, we've had reports of disk corruption too - thanks for the
advice", and the advice worked, so I am not the only one.
(In the interests of perspective, that's with ext3 on patched 2.4
kernels on an ARM device, but still - the barriers seem to work).
On a related note, there is advice floating about the net to run with
IDE write cache turned off if you're running a database and care about
integrity. That has much worse performance than barriers.
I guess the patch which fixes fsync is particularly useful for those
database users, as it means they can run with write cache enabled and
depend on fsync() to give equivalent integrity now. (Enabling
journalling is not really relevant to this).
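The pattern such database users rely on is write-then-fsync before
acknowledging a commit - a minimal sketch (the `durable_write` helper is
hypothetical; the point that fsync() flushes the drive's write cache
assumes the fsync fix discussed in this thread is applied):

```python
import os
import tempfile

def durable_write(path, data):
    """Write data and force it to stable storage before returning.

    With a fixed fsync(), this is safe even with the drive write
    cache enabled, because fsync() issues the cache flush itself.
    """
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)  # must not return until data is on the platter
    finally:
        os.close(fd)

# Usage: a commit record is only acknowledged after durable_write returns.
path = os.path.join(tempfile.mkdtemp(), "commit.log")
durable_write(path, b"committed record\n")
print(open(path, "rb").read())
```

Without the fix, the same call returns after the data reaches the drive's
volatile cache, which is exactly the window a power loss exploits.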