aio_read seems to trigger the problem, but there is a lot of buffering going on that I don't understand
On 2022-11-23 18:51, Chuck Lever III wrote:
My guess is that one has to look very hard at qcow2 handling in qemu...
On Nov 23, 2022, at 12:49 PM, Benjamin Coddington <bcodding@xxxxxxxxxx> wrote:
On 23 Nov 2022, at 5:08, Anders Blomdell wrote:
Our problems turned out to be fallout from Al Viro's splice rework: nfsd reads that start at a
non-zero offset and do not end on a page boundary fail to remap the last page. I believe this is
a decent fix for that problem (tested on v6.1-rc6, 6.0.7 and 6.0.9):
--- a/fs/nfsd/vfs.c
+++ b/fs/nfsd/vfs.c
@@ -873,7 +873,7 @@ nfsd_splice_actor(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
 	unsigned offset = buf->offset;
 
 	page += offset / PAGE_SIZE;
-	for (int i = sd->len; i > 0; i -= PAGE_SIZE)
+	for (int i = sd->len + offset % PAGE_SIZE; i > 0; i -= PAGE_SIZE)
 		svc_rqst_replace_page(rqstp, page++);
 	if (rqstp->rq_res.page_len == 0)	// first call
 		rqstp->rq_res.page_base = offset % PAGE_SIZE;
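
To make the off-by-one-page clearer, here is a minimal userspace sketch of the arithmetic (not
kernel code; the 4096-byte PAGE_SIZE and the 512/4096 offset/length values are just made-up
example numbers): when buf->offset is not page-aligned, the data spills onto one more page than
sd->len alone accounts for, and the added offset % PAGE_SIZE term makes the loop walk that extra
page.

#include <stdio.h>

#define PAGE_SIZE 4096

int main(void)
{
	/* Hypothetical READ: splice buffer starts 512 bytes into a page and
	 * carries 4096 bytes, so the data actually spans two pages. */
	unsigned offset = 512;		/* stands in for buf->offset */
	unsigned len = 4096;		/* stands in for sd->len */

	unsigned spanned = (offset % PAGE_SIZE + len + PAGE_SIZE - 1) / PAGE_SIZE;

	/* Old loop bound: counts down from len only. */
	unsigned old_iters = 0;
	for (int i = len; i > 0; i -= PAGE_SIZE)
		old_iters++;

	/* Patched loop bound: also covers the leading partial page. */
	unsigned new_iters = 0;
	for (int i = len + offset % PAGE_SIZE; i > 0; i -= PAGE_SIZE)
		new_iters++;

	printf("pages spanned %u, old loop %u, new loop %u\n",
	       spanned, old_iters, new_iters);
	return 0;
}

With those numbers it prints "pages spanned 2, old loop 1, new loop 2", i.e. the pre-patch loop
leaves the last page unreplaced.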
Does anyone have insight into how we could possibly have caught this in testing?
Was also wondering this. I had thought fstests (via fsx) would have exercised
this usage scenario.
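
For what it's worth, a plain buffered read from an NFS client shouldn't hit this, since the page
cache issues page-aligned READs; my understanding (an assumption, not something I've verified on
every configuration) is that it takes a direct read at a non-page-aligned offset to make the
server's splice buffer start mid-page. Something along these lines is the shape of I/O a test
would need to generate; the mount path, offset and length below are made up:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* O_DIRECT bypasses the client page cache, so the READ on the wire
	 * keeps the odd 512-byte offset instead of being rounded to pages. */
	int fd = open("/mnt/nfs/testfile", O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	void *buf;
	if (posix_memalign(&buf, 512, 8192)) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}

	/* 8192 bytes starting 512 bytes in: the reply data starts mid-page
	 * and does not end on a page boundary on the server side. */
	ssize_t n = pread(fd, buf, 8192, 512);
	printf("pread returned %zd\n", n);

	free(buf);
	close(fd);
	return 0;
}

fsx does generate odd offsets and lengths, but unless its reads go out with direct I/O against an
NFS mount the client will still page-align the READs, which may be why it never tripped over this.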