RE: [PATCH] [v2] nvme: use correct upper limit for tag in nvme_handle_cqe()

From: Tianxianting
Date: Sun Sep 20 2020 - 04:28:36 EST


Hi,
I tested and traced the init flow of the nvme admin queue and io queues in kernel 5.6.
Currently the code uses nvmeq->q_depth as the upper limit for the tag in nvme_handle_cqe(); according to the init flows below, we already have a race today (a simplified sketch of this ordering follows the two lists).

Admin queue init flow:
1, set nvmeq->q_depth to 32 for the admin queue;
2, register the irq handler (nvme_irq) for admin queue 0;
3, set admin_tagset.queue_depth to 30 and allocate the rqs;
4, an nvme irq can happen on the admin queue (possibly before step 3 completes).

IO queue init flow:
1, set nvmeq->q_depth to 1024 for io queues 1~16;
2, register the irq handler (nvme_irq) for io queues 1~16;
3, set tagset.queue_depth to 1023 and allocate the rqs;
4, an nvme irq can happen on an io queue (possibly before step 3 completes).
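
For context, below is a simplified sketch of that ordering as I read drivers/nvme/host/pci.c in 5.6; the function names are from that file, but the call sites are condensed and error handling is omitted:

    /* Sketch only: condensed init ordering from drivers/nvme/host/pci.c (v5.6). */

    /* Admin queue, in nvme_pci_configure_admin_queue(): */
    nvme_alloc_queue(dev, 0, NVME_AQ_DEPTH);     /* step 1: q_depth = 32        */
    queue_request_irq(nvmeq);                    /* step 2: nvme_irq() is live  */

    /* ...only later, in nvme_alloc_admin_tags(): */
    dev->admin_tagset.queue_depth = NVME_AQ_MQ_TAG_DEPTH; /* step 3: depth = 30 */
    blk_mq_alloc_tag_set(&dev->admin_tagset);             /* rqs allocated here */

    /* IO queues, in nvme_create_io_queues() -> nvme_create_queue(): */
    nvme_create_queue(nvmeq, qid, polled);       /* steps 1-2: depth set, irq live */

    /* ...only later, in nvme_dev_add(): */
    dev->tagset.queue_depth = dev->q_depth - 1;  /* step 3: roughly 1023 */
    blk_mq_alloc_tag_set(&dev->tagset);

    /* The window: between step 2 and step 3, nvme_irq() -> nvme_handle_cqe()
     * can run while the tagset's queue_depth still holds a stale value. */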

So we have two issues that need fixing:
1, register the interrupt handler (nvme_irq) only after tagset init (step 3 above) is done, to avoid the race;
2, use the correct upper limit (queue_depth in the tagset) for the tag in nvme_handle_cqe(), which is the issue this patch addresses (see the sketch below).
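
To make issue 2 concrete, here is a sketch of what I intend the check to look like. Note the explicit parentheses around the ?: expression; without them `>=` binds tighter than `?:` and the condition would not test what is intended. The dev_warn body is carried over from the existing code:

    if (unlikely(cqe->command_id >=
                 (nvmeq->qid ? nvmeq->dev->tagset.queue_depth :
                               nvmeq->dev->admin_tagset.queue_depth))) {
        dev_warn(nvmeq->dev->ctrl.device,
                 "invalid id %d completed on queue %d\n",
                 cqe->command_id, le16_to_cpu(cqe->sq_id));
        return;
    }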

Is this understanding right? Thanks a lot.

-----Original Message-----
From: tianxianting (RD)
Sent: Saturday, September 19, 2020 11:15 AM
To: 'Keith Busch' <kbusch@xxxxxxxxxx>
Cc: axboe@xxxxxx; hch@xxxxxx; sagi@xxxxxxxxxxx; linux-nvme@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
Subject: RE: [PATCH] [v2] nvme: use correct upper limit for tag in nvme_handle_cqe()

Hi Keith,
Thanks a lot for your comments.
I will try to figure out a safe fix for this issue, then send it for your review :)

-----Original Message-----
From: Keith Busch [mailto:kbusch@xxxxxxxxxx]
Sent: Saturday, September 19, 2020 3:21 AM
To: tianxianting (RD) <tian.xianting@xxxxxxx>
Cc: axboe@xxxxxx; hch@xxxxxx; sagi@xxxxxxxxxxx; linux-nvme@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
Subject: Re: [PATCH] [v2] nvme: use correct upper limit for tag in nvme_handle_cqe()

On Fri, Sep 18, 2020 at 06:44:20PM +0800, Xianting Tian wrote:
> @@ -940,7 +940,9 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq, u16 idx)
>  	struct nvme_completion *cqe = &nvmeq->cqes[idx];
>  	struct request *req;
>  
> -	if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
> +	if (unlikely(cqe->command_id >=
> +		nvmeq->qid ? nvmeq->dev->tagset.queue_depth :
> +		nvmeq->dev->admin_tagset.queue_depth)) {

Both of these values are set before blk_mq_alloc_tag_set(), so you still have a race. The interrupt handler probably just shouldn't be registered with the queue before the tagset is initialized since there can't be any work for the handler to do before that happens anyway.

The controller is definitely broken, though, and will lead to unavoidable corruption if it's really behaving this way.
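
One direction that avoids trusting any depth field read from a half-initialized tagset (a sketch, not a final fix): blk_mq_tag_to_rq() already bounds-checks the tag against the tagset and returns NULL when the tag has no request, so nvme_handle_cqe() could validate the lookup result instead. Note this still assumes nvmeq->tags has been set (it is assigned in nvme_init_hctx()), so the handler-registration ordering above still matters:

    req = blk_mq_tag_to_rq(*nvmeq->tags, cqe->command_id);
    if (unlikely(!req)) {
        dev_warn(nvmeq->dev->ctrl.device,
                 "invalid id %d completed on queue %d\n",
                 cqe->command_id, le16_to_cpu(cqe->sq_id));
        return;
    }
    /* existing completion path continues as before */
    nvme_end_request(req, cqe->status, cqe->result);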