On Wed, Jan 08, 2025 at 09:35:47AM -0800, Darrick J. Wong wrote:
On Wed, Jan 08, 2025 at 03:43:04PM +0800, Chi Zhiling wrote:
On 2025/1/7 20:13, Amir Goldstein wrote:
Dave's answer to this question was that there are some legacy applications
(database applications IIRC) on production systems that rely on the fact
that xfs provides these semantics, and on the prerequisite that they run on xfs.
However, it was noted that:
1. Those applications do not require atomicity for arbitrary I/O sizes; they
typically work with an I/O size that is larger than the block size (e.g. 16K
or 64K), and they only require untorn writes at that I/O size
2. Large folios and iomap can usually provide these semantics via the folio
lock, but applications currently have no way of knowing whether those
semantics are provided or not
To be honest, it would be best if the folio lock could provide such
semantics, as that would not cause any potential problems for the
application, and we would still have hope of achieving concurrent writes.
However, I am not sure whether this is easy to implement, or whether it
would cause other problems.
Assuming we're not abandoning POSIX "Thread Interactions with Regular
File Operations", you can't use the folio lock for coordination, for
several reasons:
a) Apps can't directly control the size of the folio in the page cache
b) The folio size can (theoretically) change underneath the program at
any time (reclaim can take your large folio and the next read gets a
smaller folio)
c) If your write crosses folios, you've just crossed a synchronization
boundary and all bets are off, though all the other filesystems behave
this way and there do not seem to be any complaints
d) If you try to "guarantee" folio granularity by messing with min/max
folio size, you run the risk of ENOMEM if the base pages get fragmented
I think that's why Dave suggested range locks as the correct solution to
this; though it is a pity that so far nobody has come up with a
performant implementation.
Yes, that's a fair summary of the situation.
That said, I just had a left-field idea for a quasi-range lock
that may allow random writes to run concurrently and atomically
with reads.
Essentially, we add an unsigned long to the inode, and use it as a
lock bitmap. That gives up to 64 "lock segments" for the buffered
write. We may also need a "segment size" variable....
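Roughly, something like this - the field names are invented purely for
illustration:

struct xfs_inode {
	/* ... existing fields ... */
	unsigned long	i_seg_bitmap;	/* one bit lock per segment */
	unsigned int	i_seg_shift;	/* log2(segment size), 64kB minimum */
};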
The existing i_rwsem gets taken shared unless it is an extending
write.
For a non-extending write, we then do an offset->segment translation
and lock that bit in the bit mask. If it's already locked, we wait
on the lock bit. i.e. shared IOLOCK, exclusive write bit lock.
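Assuming the generic bit waitqueue machinery (wait_on_bit_lock() and
friends) is used for the segment bits, the write side might look roughly
like this. The helper and field names are hypothetical, and a real version
would lock every segment spanned by the write range in ascending bit order:

/* Map a file offset to a segment bit, clamped to the bitmap width. */
static inline unsigned int
xfs_seg_for_offset(struct xfs_inode *ip, loff_t offset)
{
	return min_t(loff_t, offset >> ip->i_seg_shift, BITS_PER_LONG - 1);
}

/*
 * Called with the IOLOCK held shared; take the exclusive segment bit
 * lock covering @offset, sleeping if another write already holds it.
 */
static int
xfs_seglock_write(struct xfs_inode *ip, loff_t offset)
{
	return wait_on_bit_lock(&ip->i_seg_bitmap,
			xfs_seg_for_offset(ip, offset), TASK_KILLABLE);
}

static void
xfs_segunlock_write(struct xfs_inode *ip, loff_t offset)
{
	unsigned int	seg = xfs_seg_for_offset(ip, offset);

	clear_bit_unlock(seg, &ip->i_seg_bitmap);
	wake_up_bit(&ip->i_seg_bitmap, seg);
}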
The segments are evenly sized - say a minimum of 64kB each, but when
EOF is extended or truncated (which is done with the i_rwsem held
exclusive) the segment size is rescaled. As nothing can hold bit
locks while the i_rwsem is held exclusive, this will not race with
anything.
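The rescale itself could be as simple as picking the smallest power-of-two
segment size, no smaller than 64kB, that keeps the whole file within the
bitmap - again just a sketch:

/*
 * Called with the i_rwsem held exclusive, so no segment bit locks can
 * be held by anyone else while the scale changes.
 */
static void
xfs_seglock_rescale(struct xfs_inode *ip, loff_t new_size)
{
	unsigned int	shift = 16;		/* 64kB floor */

	while ((new_size >> shift) >= BITS_PER_LONG)
		shift++;
	ip->i_seg_shift = shift;
}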
If we are doing an extending write, we take the i_rwsem shared
first, then check if the extension will rescale the locks. If lock
rescaling is needed, we have to take the i_rwsem exclusive to do the
EOF extension. Otherwise, the bit lock that covers EOF will
serialise file extensions so it can be done under a shared i_rwsem
safely.
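Putting the sketches above together, the extending write path might look
something like this (hypothetical helpers again, with error handling and
multi-segment writes waved away):

static int
xfs_ilock_extending_write(struct xfs_inode *ip, loff_t new_size,
		bool *excl)
{
	struct inode	*inode = VFS_I(ip);

	down_read(&inode->i_rwsem);
	if ((new_size >> ip->i_seg_shift) < BITS_PER_LONG) {
		/*
		 * No rescale needed - the bit lock covering the current
		 * EOF serialises concurrent extensions.
		 */
		*excl = false;
		return xfs_seglock_write(ip, i_size_read(inode));
	}

	/* Rescale needed - retake the i_rwsem exclusive. */
	up_read(&inode->i_rwsem);
	down_write(&inode->i_rwsem);
	xfs_seglock_rescale(ip, new_size);
	*excl = true;
	return 0;
}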
This will allow buffered writes to remain atomic w.r.t. each other,
and potentially allow buffered reads to wait on writes to the same
segment and so potentially provide buffered read vs buffered write
atomicity as well.
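The cheapest form of that on the read side might be a plain wait on the
segment bit before copying data out (hypothetical again, and glossing over
reads that span segments):

static int
xfs_seglock_wait_read(struct xfs_inode *ip, loff_t offset)
{
	/* Wait for any write holding this segment's bit lock to finish. */
	return wait_on_bit(&ip->i_seg_bitmap,
			xfs_seg_for_offset(ip, offset), TASK_KILLABLE);
}

Full read vs write atomicity would presumably need the read to hold the
segment lock across the copyout rather than just waiting once, but the
mechanism would be the same.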
If we need more concurrency than an unsigned long's worth of bits for
buffered writes, then maybe we can enlarge the bitmap further.
I suspect this can be extended to direct IO in a similar way to
buffered reads, and that then opens up the possibility of truncate
and fallocate() being able to use the bitmap for range exclusion,
too.
The overhead is likely minimal - setting and clearing bits in a
bitmap, as opposed to tracking ranges in a tree structure....
Thoughts?