[PATCH] io_uring: fix list corruption race in io_pollfree_wake()

From: Soham Kute

Date: Thu Feb 12 2026 - 06:56:05 EST


io_pollfree_wake() removes the poll wait entry without holding
the waitqueue head lock. The other removal paths take the head
lock before unlinking, so a concurrent removal can race with
io_pollfree_wake() and corrupt the wait entry list, which
list_debug then detects.

Take the waitqueue head lock around io_poll_remove_waitq(),
matching the locking used in io_poll_remove_entry(). poll->head
is read with smp_load_acquire(), pairing with the
smp_store_release() in io_poll_remove_waitq().

Reported-by: syzbot+ab12f0c08dd7ab8d057c@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://syzkaller.appspot.com/bug?extid=ab12f0c08dd7ab8d057c
Signed-off-by: Soham Kute <officialsohamkute@xxxxxxxxx>
---
 io_uring/poll.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/io_uring/poll.c b/io_uring/poll.c
index aac4b3b88..006154355 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -383,10 +383,17 @@ static void io_poll_cancel_req(struct io_kiocb *req)
 
 static __cold int io_pollfree_wake(struct io_kiocb *req, struct io_poll *poll)
 {
+	struct wait_queue_head *head;
 	io_poll_mark_cancelled(req);
 	/* we have to kick tw in case it's not already */
 	io_poll_execute(req, 0);
-	io_poll_remove_waitq(poll);
+	/* Pairs with smp_store_release() in io_poll_remove_waitq() */
+	head = smp_load_acquire(&poll->head);
+	if (head) {
+		spin_lock_irq(&head->lock);
+		io_poll_remove_waitq(poll);
+		spin_unlock_irq(&head->lock);
+	}
 	return 1;
 }

--
2.34.1