Re: [PATCH 12/12] io_uring: support true async buffered reads, if file provides it
From: Jens Axboe
Date: Tue May 26 2020 - 09:47:25 EST
On 5/26/20 1:38 AM, Pavel Begunkov wrote:
> On 23/05/2020 21:57, Jens Axboe wrote:
>> If the file is flagged with FMODE_BUF_RASYNC, then we don't have to punt
>> the buffered read to an io-wq worker. Instead we can rely on page
>> unlocking callbacks to support retry based async IO. This is a lot more
>> efficient than doing async thread offload.
>>
>> The retry is done similarly to how we handle poll based retry. From
>> the unlock callback, we simply queue the retry to a task_work based
>> handler.
>>
>> Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
>> ---
>> fs/io_uring.c | 99 +++++++++++++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 99 insertions(+)
>>
>> diff --git a/fs/io_uring.c b/fs/io_uring.c
>> index e95481c552ff..dd532d2634c2 100644
>> --- a/fs/io_uring.c
>> +++ b/fs/io_uring.c
>> @@ -498,6 +498,8 @@ struct io_async_rw {
>> struct iovec *iov;
>> ssize_t nr_segs;
>> ssize_t size;
>> + struct wait_page_queue wpq;
>> + struct callback_head task_work;
>> };
>>
>> struct io_async_ctx {
>> @@ -2568,6 +2570,99 @@ static int io_read_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
>> return 0;
>> }
>>
>> +static void io_async_buf_cancel(struct callback_head *cb)
>> +{
>> + struct io_async_rw *rw;
>> + struct io_ring_ctx *ctx;
>> + struct io_kiocb *req;
>> +
>> + rw = container_of(cb, struct io_async_rw, task_work);
>> + req = rw->wpq.wait.private;
>> + ctx = req->ctx;
>> +
>> + spin_lock_irq(&ctx->completion_lock);
>> + io_cqring_fill_event(req, -ECANCELED);
>
> It seems like it should go through kiocb_done()/io_complete_rw_common().
> My concern is missing io_put_kbuf().
Yeah, I noticed that too after sending it out. If you look at the
current version that I updated yesterday, it does add the missing
io_put_kbuf() handling (and also renames the iter read helper):
https://git.kernel.dk/cgit/linux-block/commit/?h=async-buffered.5&id=6f4e3a4066d0db3e3478e58cc250afb16d8d4d91
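For anyone following along without pulling that branch, the retry flow
described in the commit message works roughly like the below. This is a
simplified sketch, not the exact code from the patch: the function names
mirror the patch, but the bodies are illustrative (req->task here just
stands for the submitting task), and the wait-key matching, error
handling, and request refcounting are omitted.

/* Runs from task_work on the submitting task once the page is unlocked */
static void io_async_buf_retry(struct callback_head *cb)
{
	struct io_async_rw *rw = container_of(cb, struct io_async_rw, task_work);
	struct io_kiocb *req = rw->wpq.wait.private;
	struct io_ring_ctx *ctx = req->ctx;

	__set_current_state(TASK_RUNNING);
	mutex_lock(&ctx->uring_lock);
	/* re-issue the buffered read now that the page should be ready */
	__io_queue_sqe(req, NULL);
	mutex_unlock(&ctx->uring_lock);
}

/* Wake callback the read path hangs off the page's wait queue via rw->wpq */
static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
			     int sync, void *key)
{
	struct io_async_rw *rw = container_of(wait, struct io_async_rw, wpq.wait);
	struct io_kiocb *req = wait->private;
	struct task_struct *tsk = req->task;

	/* the real callback first checks that 'key' matches the page/bit
	 * this request is waiting on before doing anything */
	list_del_init(&wait->entry);
	init_task_work(&rw->task_work, io_async_buf_retry);
	if (!task_work_add(tsk, &rw->task_work, true))
		wake_up_process(tsk);
	return 1;
}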
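And on the io_put_kbuf() side, the idea is just that the cancel path
completes the request the same way the normal rw completion does, so a
selected provided buffer gets returned. A minimal sketch of that,
assuming io_complete_rw_common()/io_put_kbuf() as they are in the
current tree; the commit above may differ in detail (refcounting in
particular is simplified here):

static void io_async_buf_cancel(struct callback_head *cb)
{
	struct io_async_rw *rw = container_of(cb, struct io_async_rw, task_work);
	struct io_kiocb *req = rw->wpq.wait.private;

	/* posts the -ECANCELED CQE and, for REQ_F_BUFFER_SELECTED requests,
	 * drops the provided buffer via io_put_kbuf() */
	io_complete_rw_common(&req->rw.kiocb, -ECANCELED);
	io_put_req(req);
}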
--
Jens Axboe