Re: [PATCH 08/31] aio: implement IOCB_CMD_POLL

From: Al Viro
Date: Tue May 22 2018 - 17:11:35 EST


On Tue, May 22, 2018 at 01:30:45PM +0200, Christoph Hellwig wrote:

> +static inline void __aio_poll_complete(struct poll_iocb *req, __poll_t mask)
> +{
> +	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
> +
> +	fput(req->file);
> +	aio_complete(iocb, mangle_poll(mask), 0);
> +}

Careful.

> +static int aio_poll_cancel(struct kiocb *iocb)
> +{
> +	struct aio_kiocb *aiocb = container_of(iocb, struct aio_kiocb, rw);
> +	struct poll_iocb *req = &aiocb->poll;
> +	struct wait_queue_head *head = req->head;
> +	bool found = false;
> +
> +	spin_lock(&head->lock);
> +	found = __aio_poll_remove(req);
> +	spin_unlock(&head->lock);

What's to guarantee that req->head has not been freed by that point?
Look: wakeup finds ->ctx_lock held, so it leaves the sucker on the
list, removes it from the queue and schedules the call of __aio_poll_complete().
Which gets executed, starting with that fput(), just as we hit aio_poll_cancel().

You really want to do aio_complete() before fput(). That way you know that
req->wait is alive and well at least until the iocb gets removed from the list.
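
Something along these lines would do, I'd think (a sketch only, to show the
ordering; the file pointer has to be grabbed into a local first, since
aio_complete() may free the iocb and req is embedded in it):

static inline void __aio_poll_complete(struct poll_iocb *req, __poll_t mask)
{
	struct aio_kiocb *iocb = container_of(req, struct aio_kiocb, poll);
	struct file *file = req->file;

	/* drops iocb from ->active_reqs (under ->ctx_lock), so cancel
	 * can't reach for req->head after this point */
	aio_complete(iocb, mangle_poll(mask), 0);
	fput(file);
}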

> +	req->events = demangle_poll(iocb->aio_buf) | POLLERR | POLLHUP;

EPOLLERR | EPOLLHUP, please. The values are equal to POLLERR and POLLHUP on
all architectures, but let's avoid misannotations.
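
I.e. something like this - same values, but consistent with the EPOLL...
bits that demangle_poll() hands back:

	req->events = demangle_poll(iocb->aio_buf) | EPOLLERR | EPOLLHUP;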

> +	spin_lock_irq(&ctx->ctx_lock);
> +	list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
> +
> +	spin_lock(&req->head->lock);
> +	mask = req->file->f_op->poll_mask(req->file, req->events);
> +	if (!mask)
> +		__add_wait_queue(req->head, &req->wait);

ITYM
	if (!mask) {
		__add_wait_queue(req->head, &req->wait);
		list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
	}

What's the point of exposing it to aio_poll_cancel() when it has
never been on the waitqueue?
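
Spelled out (the unlock/complete tail of aio_poll() below is a guess at what
follows the part quoted above, so take it as a sketch only):

	spin_lock_irq(&ctx->ctx_lock);
	spin_lock(&req->head->lock);
	mask = req->file->f_op->poll_mask(req->file, req->events);
	if (!mask) {
		/* only now is there anything for cancel to find and remove */
		__add_wait_queue(req->head, &req->wait);
		list_add_tail(&aiocb->ki_list, &ctx->active_reqs);
	}
	spin_unlock(&req->head->lock);
	spin_unlock_irq(&ctx->ctx_lock);

	if (mask)	/* already ready - never hit the queue or ->active_reqs */
		__aio_poll_complete(req, mask);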