Re: [PATCH v1] io_uring/register.c: fix NULL pointer dereference in io_register_resize_rings

From: Jens Axboe

Date: Mon Mar 09 2026 - 14:34:50 EST


On 3/9/26 10:29 AM, Jens Axboe wrote:
> On Mar 9, 2026, at 10:05 AM, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxx> wrote:
>>
>> On Mon, 9 Mar 2026 at 06:11, Jens Axboe <axboe@xxxxxxxxx> wrote:
>>>
>>> You probably want something ala:
>>>
>>> mutex_lock(&ctx->mmap_lock);
>>> spin_lock(&ctx->completion_lock);
>>> + local_irq_disable();
>>
>> How could that ever work?
>>
>> Irqs will happily continue on other CPUs, so disabling interrupts is
>> complete nonsense as far as I can tell - whether done with
>> spin_lock_irq() *or* with local_irq_disable().
>>
>> So basically, touching ctx->rings from irq context in this section is
>> simply not an option - or the rings pointer just needs to be updated
>> atomically so that it doesn't matter.
>>
>> I assume this was tested in qemu on a single-core setup, so that
>> fundamental mistake was hidden by an irrelevant configuration.
>>
>> Where is the actual oops - for some inexplicable reason that had been
>> edited out, and it only had the call trace leading up to it? Based on
>> the incomplete information and the faulting address of 0x24, I'm
>> *guessing* that it is
>>
>> 	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
>> 		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
>>
>> in io_req_normal_work_add(), but that may be complete garbage.
>>
>> So the actual fix may be to just make damn sure that
>> IORING_SETUP_TASKRUN_FLAG is *not* set when the rings are resized.
>>
>> But for all I know, (a) I may be looking at entirely the wrong place
>> and (b) there might be millions of other places that want to access
>> ctx->rings, so the above may be the rantings of a crazy old man.
>
> Nah, you're totally right. I'm operating on a few hours of sleep and on a
> plane. I'll take a closer look later. The flag mask protecting it is a
> good idea; another option that might work better here is a dedicated
> irq-safe resize lock.

How about something like this? I don't particularly like using ->flags
for this, as those are otherwise static after the ring has been set up.
Hence it'd be better to just use a separate value for this, ->in_resize,
and pair smp_store_release/smp_load_acquire on it. The write side can be
as expensive as we want it to be, as it's not a hot path at all, and the
acquire read should be lightweight enough here.


diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 3e4a82a6f817..428eb5b2c624 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -394,6 +394,7 @@ struct io_ring_ctx {
 		atomic_t		cq_wait_nr;
 		atomic_t		cq_timeouts;
 		struct wait_queue_head	cq_wait;
+		int			in_resize;
 	} ____cacheline_aligned_in_smp;
 
 	/* timeouts */
diff --git a/io_uring/register.c b/io_uring/register.c
index 3378014e51fb..048a1dcd9df1 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -575,6 +575,7 @@ static int io_register_resize_rings(struct io_ring_ctx *ctx, void __user *arg)
 	 * ctx->mmap_lock as well. Likewise, hold the completion lock over the
 	 * duration of the actual swap.
 	 */
+	smp_store_release(&ctx->in_resize, 1);
 	mutex_lock(&ctx->mmap_lock);
 	spin_lock(&ctx->completion_lock);
 	o.rings = ctx->rings;
@@ -647,6 +648,7 @@ static int io_register_resize_rings(struct io_ring_ctx *ctx, void __user *arg)
 	if (ctx->sq_data)
 		io_sq_thread_unpark(ctx->sq_data);
 
+	smp_store_release(&ctx->in_resize, 0);
 	return ret;
 }
 
diff --git a/io_uring/tw.c b/io_uring/tw.c
index 1ee2b8ab07c8..c66ffa787ec7 100644
--- a/io_uring/tw.c
+++ b/io_uring/tw.c
@@ -152,6 +152,13 @@ void tctx_task_work(struct callback_head *cb)
 	WARN_ON_ONCE(ret);
 }
 
+static void io_mark_taskrun(struct io_ring_ctx *ctx)
+{
+	if ((ctx->flags & IORING_SETUP_TASKRUN_FLAG) &&
+	    !smp_load_acquire(&ctx->in_resize))
+		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+}
+
 void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
@@ -206,6 +213,5 @@ void io_req_local_work_add(struct io_kiocb *req, unsigned flags)
 	 */
 
 	if (!head) {
-		if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
-			atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+		io_mark_taskrun(ctx);
 		if (ctx->has_evfd)
@@ -231,8 +237,7 @@ void io_req_normal_work_add(struct io_kiocb *req)
 	if (!llist_add(&req->io_task_work.node, &tctx->task_list))
 		return;
 
-	if (ctx->flags & IORING_SETUP_TASKRUN_FLAG)
-		atomic_or(IORING_SQ_TASKRUN, &ctx->rings->sq_flags);
+	io_mark_taskrun(ctx);
 
 	/* SQPOLL doesn't need the task_work added, it'll run it itself */
 	if (ctx->flags & IORING_SETUP_SQPOLL) {

--
Jens Axboe