Re: [PATCH v7 7/8] fsdax: Introduce dax_iomap_ops for end of reflink

From: Shiyang Ruan
Date: Thu Aug 26 2021 - 23:30:09 EST




On 2021/8/20 23:18, Dan Williams wrote:
On Thu, Aug 19, 2021 at 11:13 PM ruansy.fnst <ruansy.fnst@xxxxxxxxxxx> wrote:



On 2021/8/20 11:01 AM, Dan Williams wrote:
On Sun, Aug 15, 2021 at 11:05 PM Shiyang Ruan <ruansy.fnst@xxxxxxxxxxx> wrote:

After writing data, reflink requires end operations to remap those newly
allocated extents. The current ->iomap_end() ignores the error code
returned from ->actor(), so we introduce this dax_iomap_ops and change
the dax_iomap_*() interfaces to do this job.

- the dax_iomap_ops contains the original struct iomap_ops and an
fsdax-specific ->actor_end(), which handles the end operations of reflink
- also introduce dax-specific zero_range and truncate_page
- create new dax_iomap_ops for ext2 and ext4

Signed-off-by: Shiyang Ruan <ruansy.fnst@xxxxxxxxxxx>
---
fs/dax.c | 68 +++++++++++++++++++++++++++++++++++++-----
fs/ext2/ext2.h | 3 ++
fs/ext2/file.c | 6 ++--
fs/ext2/inode.c | 11 +++++--
fs/ext4/ext4.h | 3 ++
fs/ext4/file.c | 6 ++--
fs/ext4/inode.c | 13 ++++++--
fs/iomap/buffered-io.c | 3 +-
fs/xfs/xfs_bmap_util.c | 3 +-
fs/xfs/xfs_file.c | 8 ++---
fs/xfs/xfs_iomap.c | 36 +++++++++++++++++++++-
fs/xfs/xfs_iomap.h | 33 ++++++++++++++++++++
fs/xfs/xfs_iops.c | 7 ++---
fs/xfs/xfs_reflink.c | 3 +-
include/linux/dax.h | 21 ++++++++++---
include/linux/iomap.h | 1 +
16 files changed, 189 insertions(+), 36 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 74dd918cff1f..0e0536765a7e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1348,11 +1348,30 @@ static loff_t dax_iomap_iter(const struct iomap_iter *iomi,
 	return done ? done : ret;
 }
 
+static inline int
+__dax_iomap_iter(struct iomap_iter *iter, const struct dax_iomap_ops *ops)
+{
+	int ret;
+
+	/*
+	 * Call dax_iomap_ops->actor_end() before iomap_ops->iomap_end() in
+	 * each iteration.
+	 */
+	if (iter->iomap.length && ops->actor_end) {
+		ret = ops->actor_end(iter->inode, iter->pos, iter->len,
+				iter->processed);
+		if (ret < 0)
+			return ret;
+	}
+
+	return iomap_iter(iter, &ops->iomap_ops);
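
(struct dax_iomap_ops itself is not in the hunk quoted above; judging from the
usage in __dax_iomap_iter() it looks roughly like the sketch below. The exact
parameter types are assumed, not quoted from the patch.)

```c
/* Sketch of the structure this patch describes for include/linux/dax.h. */
struct dax_iomap_ops {
	/* the ordinary iomap operations, passed through to iomap_iter() */
	struct iomap_ops	iomap_ops;
	/*
	 * fsdax-specific end operation for reflink; unlike ->iomap_end(),
	 * it is handed iter->processed exactly as the actor returned it.
	 */
	int (*actor_end)(struct inode *inode, loff_t pos, loff_t len,
			 ssize_t processed);
};
```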

This reorganization looks needlessly noisy. Why not require the
iomap_end operation to perform the actor_end work? I.e., why can't
xfs_dax_write_iomap_actor_end() just be the passed-in iomap_end? I am
not seeing where the ->iomap_end() result is ignored.


The v6 patch [1] did it that way.
[1]https://lore.kernel.org/linux-xfs/20210526005159.GF202144@locust/T/#m79a66a928da2d089e2458c1a97c0516dbfde2f7f

But Darrick reminded me that ->iomap_end() always receives a zero or
positive 'written', because iomap_apply() clamps this argument:

```
	if (ops->iomap_end) {
		ret = ops->iomap_end(inode, pos, length,
				written > 0 ? written : 0,
				flags, &iomap);
	}
```

So we cannot get the actual return code of the CoW done in ->actor(), and
as a result we cannot handle the xfs end_cow correctly in ->iomap_end().
That's where the result of the CoW gets ignored.

Ah, thank you for the explanation.

However, this still seems like too much code thrash just to get back
to the original value of iter->processed. I notice you are talking
about iomap_apply(), but that routine is now gone in Darrick's latest
iomap-for-next branch. Instead iomap_iter() does this:

if (iter->iomap.length && ops->iomap_end) {
	ret = ops->iomap_end(iter->inode, iter->pos, iomap_length(iter),
			iter->processed > 0 ? iter->processed : 0,
			iter->flags, &iter->iomap);
	if (ret < 0 && !iter->processed)
		return ret;
}

As you can see, this is the same logic as the old iomap_apply(): a negative
iter->processed won't be passed into ->iomap_end().
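
To make that concrete, the xfs ->actor_end() this series wires up is where the
unclamped value gets used. A minimal sketch of the idea, assuming the
xfs_dax_write_iomap_actor_end() name you mentioned and the existing reflink
helpers (this is not the exact hunk from the patch):

```c
/*
 * Sketch only: illustrates why ->actor_end() wants the raw iter->processed.
 * Helper names are from fs/xfs/xfs_reflink.h; the body in the real patch
 * may differ.
 */
static int
xfs_dax_write_iomap_actor_end(struct inode *inode, loff_t pos, loff_t length,
		ssize_t processed)
{
	struct xfs_inode	*ip = XFS_I(inode);

	if (!xfs_is_cow_inode(ip))
		return 0;

	if (processed <= 0) {
		/* the copy failed or made no progress: drop the CoW reservation */
		xfs_reflink_cancel_cow_range(ip, pos, length, true);
		return 0;
	}

	/* remap the newly written CoW extents into the data fork */
	return xfs_reflink_end_cow(ip, pos, processed);
}
```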


I notice that the @iomap argument to ->iomap_end() is reliably coming
from @iter. So you could do the following in your iomap_end()
callback:

struct iomap_iter *iter = container_of(iomap, typeof(*iter), iomap);
struct xfs_inode *ip = XFS_I(inode);
ssize_t written = iter->processed;

The written will be 0 or positive. The original error code is ignored.

bool cow = xfs_is_cow_inode(ip);

if (cow) {
	if (written <= 0)
		xfs_reflink_cancel_cow_range(ip, pos, length, true);
}
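
For comparison, spelling that suggestion out as a full ->iomap_end() would look
something like the sketch below; xfs_dax_write_iomap_end() is an assumed name,
and the end_cow branch is my guess at where the success case would live:

```c
/* Sketch of the container_of() approach suggested above; names assumed. */
static int
xfs_dax_write_iomap_end(struct inode *inode, loff_t pos, loff_t length,
		ssize_t written, unsigned int flags, struct iomap *iomap)
{
	/* the iomap handed to ->iomap_end() is embedded in the iomap_iter */
	struct iomap_iter	*iter = container_of(iomap, typeof(*iter), iomap);
	struct xfs_inode	*ip = XFS_I(inode);
	ssize_t			processed = iter->processed;

	if (!xfs_is_cow_inode(ip))
		return 0;

	if (processed <= 0) {
		/* nothing (or an error) was written: cancel the CoW reservation */
		xfs_reflink_cancel_cow_range(ip, pos, length, true);
		return 0;
	}

	/* otherwise remap the written CoW extents */
	return xfs_reflink_end_cow(ip, pos, processed);
}
```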


--
Thanks,
Ruan.