Re: [PATCH 12/12] io_uring: support true async buffered reads, if file provides it

From: Pavel Begunkov
Date: Tue May 26 2020 - 03:39:59 EST


On 23/05/2020 21:57, Jens Axboe wrote:
> If the file is flagged with FMODE_BUF_RASYNC, then we don't have to punt
> the buffered read to an io-wq worker. Instead we can rely on page
> unlocking callbacks to support retry based async IO. This is a lot more
> efficient than doing async thread offload.
>
> The retry is done similarly to how we handle poll based retry. From
> the unlock callback, we simply queue the retry to a task_work based
> handler.
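
For context, the unlock side that ends up queueing this handler looks
roughly like below. This is my condensed paraphrase of the series, not
the literal patch text -- io_async_buf_retry(), io_async_buf_cancel()
and wake_page_match() are names from the set, the exact body may differ:

static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
			     int sync, void *arg)
{
	struct wait_page_queue *wpq;
	struct io_kiocb *req = wait->private;
	struct io_async_rw *rw = &req->io->rw;
	struct wait_page_key *key = arg;
	struct task_struct *tsk;
	int ret;

	wpq = container_of(wait, struct wait_page_queue, wait);

	/* ignore wakeups for pages/bits this request isn't waiting on */
	ret = wake_page_match(wpq, key);
	if (ret != 1)
		return ret;

	list_del_init(&wait->entry);

	/* bounce the actual retry to task context via task_work */
	init_task_work(&rw->task_work, io_async_buf_retry);
	/* the submit reference gets dropped, acquire a new one */
	refcount_inc(&req->refs);
	tsk = req->task;
	if (unlikely(task_work_add(tsk, &rw->task_work, true))) {
		/* task is exiting, punt to io-wq just for cancelation */
		init_task_work(&rw->task_work, io_async_buf_cancel);
		tsk = io_wq_get_task(req->ctx->io_wq);
		task_work_add(tsk, &rw->task_work, true);
	}
	wake_up_process(tsk);
	return 1;
}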
>
> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
> ---
> fs/io_uring.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 99 insertions(+)
>
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index e95481c552ff..dd532d2634c2 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -498,6 +498,8 @@ struct io_async_rw {
>  	struct iovec		*iov;
>  	ssize_t			nr_segs;
>  	ssize_t			size;
> +	struct wait_page_queue	wpq;
> +	struct callback_head	task_work;
>  };
>  
>  struct io_async_ctx {
> @@ -2568,6 +2570,99 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>  	return 0;
>  }
>  
> +static void io_async_buf_cancel(struct callback_head *cb)
> +{
> +	struct io_async_rw *rw;
> +	struct io_ring_ctx *ctx;
> +	struct io_kiocb *req;
> +
> +	rw = container_of(cb, struct io_async_rw, task_work);
> +	req = rw->wpq.wait.private;
> +	ctx = req->ctx;
> +
> +	spin_lock_irq(&ctx->completion_lock);
> +	io_cqring_fill_event(req, -ECANCELED);

It seems like this should go through kiocb_done()/io_complete_rw_common().
My concern is the missing io_put_kbuf(): for a read that did buffer
select (REQ_F_BUFFER_SELECTED), completing it here leaks the selected
buffer and never reports its ID back in cqe->flags.
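
Something along these lines, perhaps -- completely untested, and
assuming __io_cqring_fill_event() (the cflags-taking variant) is usable
from here; reusing kiocb_done() directly may be nicer if it works from
task_work context:

static void io_async_buf_cancel(struct callback_head *cb)
{
	struct io_async_rw *rw;
	struct io_ring_ctx *ctx;
	struct io_kiocb *req;
	int cflags = 0;

	rw = container_of(cb, struct io_async_rw, task_work);
	req = rw->wpq.wait.private;
	ctx = req->ctx;

	/*
	 * Return the selected buffer and report its ID in cflags,
	 * as io_complete_rw_common() would have done.
	 */
	if (req->flags & REQ_F_BUFFER_SELECTED)
		cflags = io_put_kbuf(req);

	spin_lock_irq(&ctx->completion_lock);
	__io_cqring_fill_event(req, -ECANCELED, cflags);
	io_commit_cqring(ctx);
	spin_unlock_irq(&ctx->completion_lock);

	io_cqring_ev_posted(ctx);
	req_set_fail_links(req);
	io_double_put_req(req);
}

That would at least keep buffer-selected reads from leaking their
buffer on cancelation.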

> +	io_commit_cqring(ctx);
> +	spin_unlock_irq(&ctx->completion_lock);
> +
> +	io_cqring_ev_posted(ctx);
> +	req_set_fail_links(req);
> +	io_double_put_req(req);
> +}


--
Pavel Begunkov