Re: [PATCH BUGFIX] block, bfq: postpone rq preparation to insert or merge

From: Mike Galbraith
Date: Mon May 07 2018 - 01:56:55 EST


On Sun, 2018-05-06 at 09:42 +0200, Paolo Valente wrote:
>
> diff --git a/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c
> index 118f319af7c0..6662efe29b69 100644
> --- a/block/bfq-mq-iosched.c
> +++ b/block/bfq-mq-iosched.c
> @@ -525,8 +525,13 @@ static void bfq_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
>          if (unlikely(bfqd->sb_shift != bt->sb.shift))
>                  bfq_update_depths(bfqd, bt);
>
> +#if 0
>          data->shallow_depth =
>                  bfqd->word_depths[!!bfqd->wr_busy_queues][op_is_sync(op)];
                                                             ^^^^^^^^^^^^^
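
For anyone following along, my reading of the table being indexed here
(row picked by !!bfqd->wr_busy_queues, column by op_is_sync(op)) is
roughly:

/*
 * word_depths[!!wr_busy_queues][op_is_sync]:
 *
 *                           col 0 (async)   col 1 (sync)
 * row 0 (no wr queues):     [0][0]          [0][1]
 * row 1 (wr queues busy):   [1][0]          [1][1]
 *
 * per the comments quoted further down, the [x][1] slots hold the
 * sync-write depths
 */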

Q: why doesn't the top of this function look like so?

---
 block/bfq-iosched.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -539,7 +539,7 @@ static void bfq_limit_depth(unsigned int
         struct bfq_data *bfqd = data->q->elevator->elevator_data;
         struct sbitmap_queue *bt;
 
-        if (op_is_sync(op) && !op_is_write(op))
+        if (!op_is_write(op))
                 return;
 
         if (data->flags & BLK_MQ_REQ_RESERVED) {
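
To make that concrete, here's a throwaway userspace sketch (simplified
copies of op_is_sync()/op_is_write() from include/linux/blk_types.h,
flag bit values as of v4.17), comparing the old and proposed bail-out
conditions for plain reads and writes:

/* NOT kernel code: standalone sketch of the two bail-out checks. */
#include <stdbool.h>
#include <stdio.h>

#define REQ_OP_READ     0u
#define REQ_OP_WRITE    1u
#define REQ_OP_MASK     0xffu           /* low 8 bits hold the op */
#define REQ_SYNC        (1u << 11)
#define REQ_FUA         (1u << 17)
#define REQ_PREFLUSH    (1u << 18)

static bool op_is_write(unsigned int op)
{
        return op & 1;                  /* write-type ops are odd-numbered */
}

static bool op_is_sync(unsigned int op)
{
        return (op & REQ_OP_MASK) == REQ_OP_READ ||     /* reads are always sync */
               (op & (REQ_SYNC | REQ_FUA | REQ_PREFLUSH));
}

int main(void)
{
        unsigned int ops[] = {
                REQ_OP_READ,                    /* read */
                REQ_OP_READ | REQ_SYNC,         /* sync read */
                REQ_OP_WRITE,                   /* async write */
                REQ_OP_WRITE | REQ_SYNC,        /* sync write */
        };
        const char *name[] = { "read", "sync read", "async write", "sync write" };

        for (int i = 0; i < 4; i++)
                printf("%-11s  old bail-out: %d  proposed bail-out: %d\n",
                       name[i],
                       op_is_sync(ops[i]) && !op_is_write(ops[i]),
                       !op_is_write(ops[i]));
        return 0;
}

The two columns come out identical for all four cases, since reads are
always treated as sync by the block layer.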

It looks a bit odd that these elements exist...

+       /*
+        * no more than 75% of tags for sync writes (25% extra tags
+        * w.r.t. async I/O, to prevent async I/O from starving sync
+        * writes)
+        */
+       bfqd->word_depths[0][1] = max(((1U<<bfqd->sb_shift) * 3)>>2, 1U);
+
+       /* no more than ~37% of tags for sync writes (~20% extra tags) */
+       bfqd->word_depths[1][1] = max(((1U<<bfqd->sb_shift) * 6)>>4, 1U);

...yet we index via, and log, a guaranteed zero.
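
Aside: plugging an illustrative sb_shift of 6 (64 tags per sbitmap
word; the value is just an example, not from the patch) into the
arithmetic quoted above gives 48 and 24 tags. Quick userspace check:

#include <stdio.h>

int main(void)
{
        unsigned int sb_shift = 6;              /* example value only */
        unsigned int tags = 1U << sb_shift;     /* 64 */

        /* the max(..., 1U) clamp in the quoted code only matters
         * for very small sb_shift values */
        printf("75%%    of %u tags = %u\n", tags, (tags * 3) >> 2);  /* 48 */
        printf("~37.5%% of %u tags = %u\n", tags, (tags * 6) >> 4);  /* 24 */
        return 0;
}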

-Mike