Re: [PATCH] null_blk: fix poll request timeout handling
From: Chengming Zhou
Date: Wed Aug 16 2023 - 05:16:02 EST
On 2023/8/15 16:19, David Howells wrote:
> chengming.zhou@xxxxxxxxx wrote:
>
>> From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>>
>> When running an io_uring benchmark on /dev/nullb0, it's easy to crash
>> the kernel once poll request timeouts are triggered, as reported by
>> David. [1]
>>
>> BUG: kernel NULL pointer dereference, address: 0000000000000008
>> Workqueue: kblockd blk_mq_timeout_work
>> RIP: 0010:null_timeout_rq+0x4e/0x91
>> Call Trace:
>> ? __die_body+0x1a/0x5c
>> ? page_fault_oops+0x6f/0x9c
>> ? kernelmode_fixup_or_oops+0xc6/0xd6
>> ? __bad_area_nosemaphore+0x44/0x1eb
>> ? exc_page_fault+0xe2/0xf4
>> ? asm_exc_page_fault+0x22/0x30
>> ? null_timeout_rq+0x4e/0x91
>> blk_mq_handle_expired+0x31/0x4b
>> bt_iter+0x68/0x84
>> ? bt_tags_iter+0x81/0x81
>> __sbitmap_for_each_set.constprop.0+0xb0/0xf2
>> ? __blk_mq_complete_request_remote+0xf/0xf
>> bt_for_each+0x46/0x64
>> ? __blk_mq_complete_request_remote+0xf/0xf
>> ? percpu_ref_get_many+0xc/0x2a
>> blk_mq_queue_tag_busy_iter+0x14d/0x18e
>> blk_mq_timeout_work+0x95/0x127
>> process_one_work+0x185/0x263
>> worker_thread+0x1b5/0x227
>> ? rescuer_thread+0x287/0x287
>> kthread+0xfa/0x102
>> ? kthread_complete_and_exit+0x1b/0x1b
>> ret_from_fork+0x22/0x30
>>
>> This is indeed a race problem between null_timeout_rq() and null_poll().
>>
>> null_poll()                            null_timeout_rq()
>>   spin_lock(&nq->poll_lock)
>>   list_splice_init(&nq->poll_list, &list)
>>   spin_unlock(&nq->poll_lock)
>>
>>   while (!list_empty(&list))
>>     req = list_first_entry()
>>     list_del_init()
>>     ...
>>     blk_mq_add_to_batch()
>>     // req->rq_next = NULL
>>                                        spin_lock(&nq->poll_lock)
>>
>>                                        // rq->queuelist->next == NULL
>>                                        list_del_init(&rq->queuelist)
>>
>>                                        spin_unlock(&nq->poll_lock)
>>
>> What's worse is that we don't call blk_mq_complete_request_remote()
>> before blk_mq_add_to_batch(), so these completed requests are left in
>> the wrong state, rq->state == MQ_RQ_IN_FLIGHT. We can easily check this
>> using bpftrace:
>>
>> ```
>> bpftrace -e 'kretfunc:null_blk:null_poll {
>>   $iob=(struct io_comp_batch *)args->iob;
>>   @[$iob->req_list->state]=count();
>> }'
>>
>> @[1]: 51708
>> ```
>>
>> Fix these problems by setting the request state to MQ_RQ_COMPLETE under
>> nq->poll_lock protection, so that null_timeout_rq() can safely detect
>> this race and return early.
>>
>> [1] https://lore.kernel.org/all/3893581.1691785261@xxxxxxxxxxxxxxxxxxxxxx/
>>
>> Fixes: 0a593fbbc245 ("null_blk: poll queue support")
>> Reported-by: David Howells <dhowells@xxxxxxxxxx>
>> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
>
> Okay, the oops no longer seems to happen, so on that basis:
>
> Tested-by: David Howells <dhowells@xxxxxxxxxx>
>
Yes, this patch just fixes the kernel oops when a request timeout happens.
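
For reference, a rough sketch of the idea (not the exact patch; the unchanged
parts of null_poll() and null_timeout_rq() are elided and details may differ),
assuming the existing blk_mq_set_request_complete()/blk_mq_request_completed()
helpers: mark the spliced-off poll requests MQ_RQ_COMPLETE while nq->poll_lock
is still held, and have the timeout handler check for that under the same lock
before touching the queuelist.

```
static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
{
	struct nullb_queue *nq = hctx->driver_data;
	struct request *req;
	LIST_HEAD(list);

	spin_lock(&nq->poll_lock);
	list_splice_init(&nq->poll_list, &list);
	/*
	 * Flip every claimed request to MQ_RQ_COMPLETE while the lock is
	 * still held, so a concurrent null_timeout_rq() can see it.
	 */
	list_for_each_entry(req, &list, queuelist)
		blk_mq_set_request_complete(req);
	spin_unlock(&nq->poll_lock);

	/* ... end each request and add it to the io_comp_batch as before ... */
	return 0;
}

static enum blk_eh_timer_return null_timeout_rq(struct request *rq)
{
	struct blk_mq_hw_ctx *hctx = rq->mq_hctx;

	if (hctx->type == HCTX_TYPE_POLL) {
		struct nullb_queue *nq = hctx->driver_data;

		spin_lock(&nq->poll_lock);
		/*
		 * null_poll() already took this request off poll_list and
		 * will complete it; don't touch its queuelist and don't
		 * complete it a second time.
		 */
		if (blk_mq_request_completed(rq)) {
			spin_unlock(&nq->poll_lock);
			return BLK_EH_DONE;
		}
		list_del_init(&rq->queuelist);
		spin_unlock(&nq->poll_lock);
	}

	/* ... existing timeout handling (set the error, complete the rq) ... */
	return BLK_EH_DONE;
}
```

The point is that the state change and the list manipulation happen under the
same nq->poll_lock, so the two paths can no longer interleave as in the
diagram above.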
>
> However, running:
>
> ./fio/t/io_uring -n4 /dev/nullb0
>
> and then interrupting it with ctrl-C after a while dumps a whole load of
> messages into the dmesg log (excerpt attached). It seems wrong that the user
> should be able to generate a dump like this just by interrupting - but I guess
> as it's null_blk it probably doesn't matter.
I can reproduce it; it may be an issue in io_uring. Although io_uring will reap
all pending requests when the task exits, it seems to block for a few seconds
before doing the reap, so the timeouts happen. I'm not sure, just a guess ;-)
Thanks.
>
> David
> ---
> null_blk: rq 00000000bb2d3264 timed out
> timeout error, dev nullb0, sector 328372624 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000abcc1075 timed out
> timeout error, dev nullb0, sector 378610072 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000d4bdc71f timed out
> timeout error, dev nullb0, sector 185005312 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000f4ffddee timed out
> timeout error, dev nullb0, sector 206118608 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 000000001e68b709 timed out
> timeout error, dev nullb0, sector 310381160 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000bfeafe97 timed out
> timeout error, dev nullb0, sector 52036480 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000aa67d21c timed out
> timeout error, dev nullb0, sector 22746448 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000faec1291 timed out
> timeout error, dev nullb0, sector 391201440 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 00000000c634428c timed out
> timeout error, dev nullb0, sector 237216136 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 0000000077f91a5d timed out
> timeout error, dev nullb0, sector 453778912 op 0x0:(READ) flags 0xe00000 phys_seg 1 prio class 2
> null_blk: rq 000000003076467c timed out
> null_blk: rq 000000009c172678 timed out
> null_blk: rq 000000002df50b48 timed out
> null_blk: rq 00000000e4c66900 timed out
> null_blk: rq 0000000082606e31 timed out
> null_blk: rq 00000000fe21ffdc timed out
> null_blk: rq 000000005e5c5173 timed out
> null_blk: rq 00000000b0a0d20c timed out
> null_blk: rq 000000008c729e47 timed out
> null_blk: rq 00000000970f75a0 timed out
> null_blk: rq 000000002ad3c45a timed out
>