Re: [PATCH] block: recalculate segment count for multi-segment discard requests correctly
From: John Pittman
Date: Mon Feb 08 2021 - 15:24:23 EST
Hi Jens. When you get a moment, could you take a quick look at this one for an ack?
On Thu, Feb 4, 2021 at 11:49 AM Laurence Oberman <loberman@xxxxxxxxxx> wrote:
>
> On Thu, 2021-02-04 at 10:27 +0800, Ming Lei wrote:
> > > On Mon, Feb 01, 2021 at 11:48:50AM -0500, David Jeffery wrote:
> > > > When a stacked block device inserts a request into another block
> > > > device using blk_insert_cloned_request, the request's
> > > > nr_phys_segments field gets recalculated by a call to
> > > > blk_recalc_rq_segments in blk_cloned_rq_check_limits. But
> > > > blk_recalc_rq_segments does not know how to handle multi-segment
> > > > discards. For disk types which can handle multi-segment discards,
> > > > like nvme, this results in discard requests which claim a single
> > > > segment when they should report several, triggering a warning in
> > > > nvme and causing nvme to fail the discard from the invalid state.
> > >
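
[ For context: the recalculation happens on the blk_insert_cloned_request()
  path, roughly as sketched below. This is a paraphrase of the relevant part
  of blk_cloned_rq_check_limits() from kernels of this era, not verbatim
  source; the point is only that whatever blk_recalc_rq_segments() returns
  is the segment count the underlying driver later sees. ]

	/* sketch: the segment count is recomputed against the target queue */
	rq->nr_phys_segments = blk_recalc_rq_segments(rq);
	if (rq->nr_phys_segments > queue_max_segments(q))
		return BLK_STS_IOERR;	/* cloned request exceeds queue limits */
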
> > > WARNING: CPU: 5 PID: 191 at drivers/nvme/host/core.c:700
> > > nvme_setup_discard+0x170/0x1e0 [nvme_core]
> > > ...
> > > nvme_setup_cmd+0x217/0x270 [nvme_core]
> > > nvme_loop_queue_rq+0x51/0x1b0 [nvme_loop]
> > > __blk_mq_try_issue_directly+0xe7/0x1b0
> > > blk_mq_request_issue_directly+0x41/0x70
> > > ? blk_account_io_start+0x40/0x50
> > > dm_mq_queue_rq+0x200/0x3e0
> > > blk_mq_dispatch_rq_list+0x10a/0x7d0
> > > ? __sbitmap_queue_get+0x25/0x90
> > > ? elv_rb_del+0x1f/0x30
> > > ? deadline_remove_request+0x55/0xb0
> > > ? dd_dispatch_request+0x181/0x210
> > > __blk_mq_do_dispatch_sched+0x144/0x290
> > > ? bio_attempt_discard_merge+0x134/0x1f0
> > > __blk_mq_sched_dispatch_requests+0x129/0x180
> > > blk_mq_sched_dispatch_requests+0x30/0x60
> > > __blk_mq_run_hw_queue+0x47/0xe0
> > > __blk_mq_delay_run_hw_queue+0x15b/0x170
> > > blk_mq_sched_insert_requests+0x68/0xe0
> > > blk_mq_flush_plug_list+0xf0/0x170
> > > blk_finish_plug+0x36/0x50
> > > xlog_cil_committed+0x19f/0x290 [xfs]
> > > xlog_cil_process_committed+0x57/0x80 [xfs]
> > > xlog_state_do_callback+0x1e0/0x2a0 [xfs]
> > > xlog_ioend_work+0x2f/0x80 [xfs]
> > > process_one_work+0x1b6/0x350
> > > worker_thread+0x53/0x3e0
> > > ? process_one_work+0x350/0x350
> > > kthread+0x11b/0x140
> > > ? __kthread_bind_mask+0x60/0x60
> > > ret_from_fork+0x22/0x30
> > >
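
[ The WARN_ON_ONCE at drivers/nvme/host/core.c:700 is the segment-count
  sanity check in nvme_setup_discard(). A simplified sketch (paraphrased,
  not verbatim driver source): the driver sizes its DSM range array from
  blk_rq_nr_discard_segments() and then walks the request's bios; when the
  bio count disagrees with the reported segment count, it warns and fails
  the request. ]

	unsigned short segments = blk_rq_nr_discard_segments(req), n = 0;
	struct bio *bio;

	__rq_for_each_bio(bio, req) {
		if (n < segments)
			; /* fill DSM range n from this bio */
		n++;
	}
	if (WARN_ON_ONCE(n != segments))	/* the warning in the trace above */
		return BLK_STS_IOERR;
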
> > > This patch fixes blk_recalc_rq_segments to be aware of devices
> > > which can have multi-segment discards. It calculates the correct
> > > discard segment count by counting the number of bios, as each
> > > discard bio is considered its own segment.
> > >
> > > Signed-off-by: David Jeffery <djeffery@xxxxxxxxxx>
> > > Tested-by: Laurence Oberman <loberman@xxxxxxxxxx>
> > > ---
> > > block/blk-merge.c | 7 +++++++
> > > 1 file changed, 7 insertions(+)
> > >
> > > diff --git a/block/blk-merge.c b/block/blk-merge.c
> > > index 808768f6b174..fe7358bd5d09 100644
> > > --- a/block/blk-merge.c
> > > +++ b/block/blk-merge.c
> > > @@ -382,6 +382,13 @@ unsigned int blk_recalc_rq_segments(struct request *rq)
> > >
> > >          switch (bio_op(rq->bio)) {
> > >          case REQ_OP_DISCARD:
> > > +                if (queue_max_discard_segments(rq->q) > 1) {
> > > +                        struct bio *bio = rq->bio;
> > > +                        for_each_bio(bio)
> > > +                                nr_phys_segs++;
> > > +                        return nr_phys_segs;
> > > +                }
> > > +                /* fall through */
> > >          case REQ_OP_SECURE_ERASE:
> > >          case REQ_OP_WRITE_ZEROES:
> > >                  return 0;
> >
> > blk_rq_nr_discard_segments() always returns >= 1 segments, so there is
> > no similar issue in the case of a single-range discard.
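
[ For reference, blk_rq_nr_discard_segments() is defined in
  include/linux/blkdev.h roughly as below (quoted from memory, so the exact
  form may differ between kernel versions); it clamps the count to a minimum
  of 1, which is why a single-range discard is unaffected. ]

	static inline unsigned short blk_rq_nr_discard_segments(struct request *rq)
	{
		return max_t(unsigned short, rq->nr_phys_segments, 1);
	}
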
> >
> > Reviewed-by: Ming Lei <ming.lei@xxxxxxxxxx>
> >
> > And it can be thought of as:
> >
> > Fixes: 1e739730c5b9 ("block: optionally merge discontiguous discard bios into a single request")
> >
> >
>
> Great, can we get enough acks and push this through? It's urgent for me.
> Reviewed-by: Laurence Oberman <loberman@xxxxxxxxxx>
>