Re: [PATCH -next] raid10: fix leak of io accounting

From: Guoqing Jiang
Date: Thu Mar 09 2023 - 02:27:22 EST

On 3/9/23 14:56, Yu Kuai wrote:
Hi,

On 2023/03/09 14:36, Guoqing Jiang wrote:
Hi,

What do you mean 'leak' here?

I meant that the inflight counter is leaked, because it is increased
twice for one io while only being decreased once.

How about changing the subject to something like:

'md/raid10: Don't call bio_start_io_acct twice for bio which experienced read error'



On 3/4/23 15:01, Yu Kuai wrote:
From: Yu Kuai <yukuai3@xxxxxxxxxx>

handle_read_error() will resubmit r10_bio through raid10_read_request(),
which will call bio_start_io_acct() again, while bio_end_io_acct() will
only be called once.

Fix the problem by not accounting the io again from handle_read_error().

My understanding is that it causes inaccurate io stats for bios which
experienced a read error.
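
For reference, the accounting calls for a bio that experiences a read
error land roughly like this (a simplified call chain; function names
as in drivers/md/raid10.c):

    raid10_read_request()
        bio_start_io_acct()            <- inflight++ (first time)
        ... read fails ...
    handle_read_error()
        raid10_read_request()          <- resubmits the same r10_bio
            bio_start_io_acct()        <- inflight++ again, never paired
    raid_end_bio_io()
        bio_end_io_acct()              <- inflight-- happens only once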

Fixes: 528bc2cf2fcc ("md/raid10: enable io accounting")
Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
  drivers/md/raid10.c | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 6c66357f92f5..4f8edb6ea3e2 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1173,7 +1173,7 @@ static bool regular_request_wait(struct mddev *mddev, struct r10conf *conf,
  }
  static void raid10_read_request(struct mddev *mddev, struct bio *bio,
-                struct r10bio *r10_bio)
+                struct r10bio *r10_bio, bool handle_error)
  {
      struct r10conf *conf = mddev->private;
      struct bio *read_bio;
@@ -1244,7 +1244,7 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
      }
      slot = r10_bio->read_slot;
-    if (blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
+    if (!handle_error && blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
          r10_bio->start_time = bio_start_io_acct(bio);

I think a simpler way is to just check R10BIO_ReadError here.

No, I'm afraid this is incorrect because handle_read_error() clears the
state before resubmitting the r10bio.
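
For reference, the resubmission path looks roughly like this (a heavily
simplified sketch of handle_read_error(); the exact code depends on the
kernel version):

    static void handle_read_error(struct mddev *mddev, struct r10bio *r10_bio)
    {
        /* ... fix_read_error() and rdev handling elided ... */

        r10_bio->state = 0;    /* clears all flags, including R10BIO_ReadError */
        raid10_read_request(mddev, r10_bio->master_bio, r10_bio);
        /* the retried request can no longer be told apart by the flag */
    }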

Right,

Acked-by: Guoqing Jiang <guoqing.jiang@xxxxxxxxx>

Thanks,
Guoqing