On Wed 08-01-25 11:43:08, Baokun Li wrote:
> On 2025/1/6 22:32, Jan Kara wrote:
>> I agree it makes sense to make the semantics of data_err=abort more
>> obvious. Based on the usecase you've described - i.e., rather take the
>> filesystem down on write IO error than risk returning old data later -
>> it would make sense to me to also do this on direct IO writes.
>
> Okay, I will update the semantics of data_err=abort in the next version.
>
> But as you said, we don't track overwrite writes for performance reasons.
> Compared to the poor performance of data=journal and the risk of
> drop_caches exposing stale data, not being able to detect data errors on
> overwrite writes is acceptable.
>
> After enabling data_err=abort in dioread_nolock mode, the user will not
> see unexpected all-zero data in the unwritten area after drop_caches or a
> remount, but rather the earlier consistent data, so the data in the file
> can be trusted, at the cost of losing some trailing data.
>
> On the other hand, adding a new written extent and converting an
> unwritten extent to written both expose the data to the user, so the user
> cares whether the data is correct at that point.
>
> In general, I think we can update the semantics of data_err=abort to
> "abort the journal if the file fails to write back data on extending
> writes in ORDERED mode". Do you have any thoughts on this?
>
> For direct I/O writes, I think we don't need it because users can
> perceive errors in time.

This is not quite right, regardless of whether it is a BIO write or a DIO
write. I agree that direct IO users will generally notice the IO error, so
the chances of bugs due to a missed IO error are low. But I think the
question is really the other way around: is there a good reason to make
direct IO writes different? Because if I as a sysadmin want to secure a
system against IO error handling bugs, then having to think about whether
some application uses direct IO or not is another nuisance. Why should I
be bothered?
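
A minimal sketch of the sysadmin side, assuming a hypothetical device and
mount point: the option is set once at mount time for the whole
filesystem, without knowing how the applications on it submit their
writes.

	/* Mount an ext4 filesystem with data_err=abort; the device and
	 * mount point are hypothetical. The option applies to the whole
	 * filesystem; whether it should also cover direct IO writes is
	 * the question discussed above.
	 */
	#include <stdio.h>
	#include <sys/mount.h>

	int main(void)
	{
		if (mount("/dev/vdb1", "/mnt/scratch", "ext4", 0,
			  "data=ordered,data_err=abort") < 0) {
			perror("mount");
			return 1;
		}
		return 0;
	}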

>> Also I would do this regardless of data=writeback/ordered/journalled
>> mode because although users wanting data_err=abort behavior will also
>> likely want the guarantees of data=ordered mode, these are two different
>> things
>
> I see your point. I concur that it is indeed meaningful.
>
> For data=journal mode, the journal itself will abort when data is
> abnormal. However, as you pointed out, the above bug may cause errors to
> be missed. Therefore, we can perform this check by default for journaled
> files.
>
>> and I can imagine use cases for setups with data=writeback and
>> data_err=abort as well (e.g. for scratch filesystems which get recreated
>> on each system startup).
>
> Users using data=writeback often do not care about data consistency.
>
> I did not understand your example. Could you please explain it in detail?

Well, they don't care about data consistency after a crash. But they
usually do care about data consistency while the system is running. And
unhandled IO errors can lead to data consistency problems without crashing
the system (for example, if writeback fails and the page later gets
evicted from memory, you have lost the new data and may see the old
version of it).
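
A minimal sketch of that failure mode, assuming a hypothetical file on the
affected filesystem: the buffered write() succeeds as soon as the data is
in the page cache, so a later writeback failure is only reported to an
application that bothers to check fsync().

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* Hypothetical path on the filesystem in question. */
		int fd = open("/mnt/scratch/file", O_WRONLY | O_CREAT, 0644);

		if (fd < 0) {
			perror("open");
			return 1;
		}

		/* Succeeds once the data is in the page cache. */
		if (write(fd, "new data", 8) != 8)
			perror("write");

		/*
		 * Without this call, a failed writeback is never seen by
		 * the application: the dirty page may be evicted and a
		 * later read returns the old on-disk version.
		 */
		if (fsync(fd) < 0)
			perror("fsync");	/* e.g. EIO */

		close(fd);
		return 0;
	}
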
And I see data_err=abort as a way to say: "I don't trust my applications to
handle IO errors well. Rather take the filesystem down in that case than
risk data consistency issues".
Honza