Re: [PATCH] block/mq-deadline: Fix WARN when set async_depth by sysfs

From: Zhiguo Niu
Date: Wed Apr 03 2024 - 01:41:50 EST


On Wed, Apr 3, 2024 at 7:20 AM Bart Van Assche <bvanassche@xxxxxxx> wrote:
>
> On 4/1/24 10:44 PM, Zhiguo Niu wrote:
> > On Tue, Apr 2, 2024 at 5:23 AM Bart Van Assche <bvanassche@xxxxxxx> wrote:
> >> Oops, I misread your patch. After having taken another look, my
> >> conclusions are as follows:
> >> * sbitmap_queue_min_shallow_depth() is called. This causes
> >> sbq->wake_batch to be modified but I don't think that it is a proper
> >> fix for dd_limit_depth().
> > yes, it will affect sbq->wake_batch, but judging from the following code:
> > [ ... ]
>
> If we want to allow small values of dd->async_depth, min_shallow_depth
> must be 1. The BFQ I/O scheduler also follows this approach.
>
> >> * dd_limit_depth() still assigns a number in the range 1..nr_requests to
> >> data->shallow_depth while a number in the range 1..(1<<bt->sb.shift)
> >> should be assigned.
> > yes, in order to avoid the performance regression problem that Harshit
> > Mogalapalli reported, this patch does not directly modify
> > dd->async_depth,
> > but the user can modify dd->async_depth from sysfs according to their
> > needs, and data->shallow_depth will be updated after the user changes
> > it via sysfs.
>
> It seems like there is no other option than keeping the current default
> depth limit for async requests ...
>
> > My personal opinion is to keep the current dd->async_depth unchanged to
> > avoid causing a performance regression,
> > but users should be allowed to set it via sysfs, and the WARN mentioned
> > above is best fixed as well.
> > So how about changing only this part?
> > - sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
> > + sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, 1);
> > thanks!
>
> The above change will suppress the kernel warning. I think the other
> changes from patch 2/2 are also necessary. Otherwise the unit of
> "async_depth" will depend on sbitmap word shift parameter. I don't think
> that users should have to worry about which shift value has been chosen
> by the sbitmap implementation.
Hi Bart Van Assche,
So will you help prepare an official patch version covering patch 1 and patch 2?

Besides, I am not sure whether the following part will cause a
performance regression or not:
+static int dd_to_word_depth(struct blk_mq_hw_ctx *hctx, unsigned int qdepth)
+{
+	struct sbitmap_queue *bt = &hctx->sched_tags->bitmap_tags;
+	const unsigned int nrr = hctx->queue->nr_requests;
+
+	return max(((qdepth << bt->sb.shift) + nrr - 1) / nrr,
+		   bt->min_shallow_depth);
+}
+
which is somewhat similar to the previous commit d47f9717e5cf
("block/mq-deadline: use correct way to throttling write requests").
Just as an example, with the following hw conditions: 8 CPUs, eMMC flash,
one hw queue, queue_depth=64, sched_tags/nr_requests=128, initial
dd->async_depth=96, and bt->sb.shift=5,
max(((qdepth << bt->sb.shift) + nrr - 1) / nrr, bt->min_shallow_depth)
gives 24, and commit d47f9717e5cf ("block/mq-deadline: use correct way
to throttling write requests") also gives 24, but your revert commit
256aab46e316 ("Revert "block/mq-deadline: use correct way to throttling
write requests"") will give 96.
Or am I missing something else?
thanks!
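
(For reference, here is a small standalone sketch of the arithmetic above.
The function name, parameters and main() wrapper are only illustrative and
are not taken from either patch.)

#include <stdio.h>

/* Same rounding as the proposed helper: scale qdepth from request units
 * to sbitmap-word units, rounding up, but never below min_shallow_depth. */
static unsigned int word_depth(unsigned int qdepth, unsigned int shift,
			       unsigned int nrr, unsigned int min_shallow)
{
	unsigned int d = ((qdepth << shift) + nrr - 1) / nrr;

	return d > min_shallow ? d : min_shallow;
}

int main(void)
{
	/* async_depth=96, sb.shift=5, nr_requests=128, min_shallow_depth=1:
	 * (96 << 5) = 3072 and (3072 + 127) / 128 = 24 (integer division),
	 * versus 96 when async_depth is passed through unscaled. */
	printf("%u\n", word_depth(96, 5, 128, 1));
	return 0;
}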
>
> Thanks,
>
> Bart.
>