Re: [RESEND] [PATCH] readahead:add blk_run_backing_dev

From: Vladislav Bolkhovitin
Date: Fri Jul 31 2009 - 14:32:25 EST



Ronald Moesbergen, on 07/29/2009 04:48 PM wrote:
2009/7/28 Vladislav Bolkhovitin <vst@xxxxxxxx>:
Can you perform tests 5 and 8 with the deadline scheduler? I asked for deadline.

What I/O scheduler do you use on the initiator? Can you check if changing it
to deadline or noop makes any difference?
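(For reference, the active scheduler, together with the read_ahead_kb and max_sectors_kb knobs referenced in the results below, lives under /sys/block/<dev>/queue/. A minimal Python sketch for inspecting and, optionally, changing them; the device name "sdb" is only an example and writes require root:)

#!/usr/bin/env python
# Minimal sketch: inspect (and optionally change) block-queue tunables
# via sysfs. The device name "sdb" is only an example; writes need root.
import sys

def queue_path(dev, knob):
    return "/sys/block/%s/queue/%s" % (dev, knob)

def show(dev):
    # Print the current scheduler (the active one is shown in brackets)
    # plus the readahead and max_sectors_kb settings.
    for knob in ("scheduler", "read_ahead_kb", "max_sectors_kb"):
        with open(queue_path(dev, knob)) as f:
            print("%s %s: %s" % (dev, knob, f.read().strip()))

def set_knob(dev, knob, value):
    # e.g. set_knob("sdb", "scheduler", "noop")
    #      set_knob("sdb", "read_ahead_kb", 2048)   # 2 MB readahead
    #      set_knob("sdb", "max_sectors_kb", 64)
    with open(queue_path(dev, knob), "w") as f:
        f.write(str(value))

if __name__ == "__main__":
    show(sys.argv[1] if len(sys.argv) > 1 else "sdb")

(The "RA 2MB, 64 max_sectors_kb" settings in cases 8 and 11 below correspond to read_ahead_kb=2048 and max_sectors_kb=64.)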


client kernel: 2.6.26-15lenny3 (Debian)
server kernel: 2.6.29.5 with readahead-context, blk_run_backing_dev
and io_context, forced_order

With one I/O thread:
5) client: default, server: default (server deadline, client cfq)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 15.739 15.339 16.511 64.613 1.959 1.010
33554432 15.411 12.384 15.400 71.876 7.646 2.246
16777216 16.564 15.569 16.279 63.498 1.667 3.969

5) client: default, server: default (server deadline, client deadline)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 17.578 20.051 18.010 55.395 3.111 0.866
33554432 19.247 12.607 17.930 63.846 12.390 1.995
16777216 14.587 19.631 18.032 59.718 7.650 3.732

8) client: default, server: 64 max_sectors_kb, RA 2MB (server
deadline, client deadline)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 17.418 19.520 22.050 52.564 5.043 0.821
33554432 21.263 17.623 17.782 54.616 4.571 1.707
16777216 17.896 18.335 19.407 55.278 1.864 3.455

8) client: default, server: 64 max_sectors_kb, RA 2MB (server
deadline, client cfq)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 16.639 15.216 16.035 64.233 2.365 1.004
33554432 15.750 16.511 16.092 63.557 1.224 1.986
16777216 16.390 15.866 15.331 64.604 1.763 4.038

11) client: 2MB RA, 64 max_sectors_kb, server: 64 max_sectors_kb, RA
2MB (server deadline, client deadline)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 14.117 13.610 13.558 74.435 1.347 1.163
33554432 13.450 10.344 13.556 83.555 10.918 2.611
16777216 13.408 13.319 13.239 76.867 0.398 4.804

With two I/O threads:
5) client: default, server: default (server deadline, client cfq)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 15.723 16.535 16.189 63.438 1.312 0.991
33554432 16.152 16.363 15.782 63.621 0.954 1.988
16777216 15.174 16.084 16.682 64.178 2.516 4.011

5) client: default, server: default (server deadline, client deadline)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 18.087 18.082 17.639 57.099 0.674 0.892
33554432 18.377 15.750 17.551 59.694 3.912 1.865
16777216 18.490 15.553 18.778 58.585 5.143 3.662

8) client: default, server: 64 max_sectors_kb, RA 2MB (server
deadline, client deadline)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 18.140 19.114 17.442 56.244 2.103 0.879
33554432 17.183 17.233 21.367 55.646 5.461 1.739
16777216 19.813 17.965 18.132 55.053 2.393 3.441

8) client: default, server: 64 max_sectors_kb, RA 2MB (server
deadline, client cfq)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 15.753 16.085 16.522 63.548 1.239 0.993
33554432 13.502 15.912 15.507 68.743 5.065 2.148
16777216 16.584 16.171 15.959 63.077 1.003 3.942

11) client: 2MB RA, 64 max_sectors_kb, server: 64 max_sectors_kb, RA
2MB (server deadline, client deadline)
blocksize(bytes)  R1(s)  R2(s)  R3(s)  R(avg,MB/s)  R(std,MB/s)  R(IOPS)
67108864 14.051 13.427 13.498 75.001 1.510 1.172
33554432 13.397 14.008 13.453 75.217 1.503 2.351
16777216 13.277 9.942 14.318 83.882 13.712 5.243
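
(For clarity on the derived columns: the figures are consistent with each run reading 1 GiB and the average/standard deviation being taken over the per-run throughputs. The 1 GiB transfer size is an inference from the numbers above, not something stated in the results. A small sketch:)

#!/usr/bin/env python
# Sketch of how the derived columns in the tables above appear to be
# computed. Assumption (inferred from the figures, not stated): each
# run transfers 1 GiB.
import math

TRANSFER_MB = 1024.0  # assumed data read per run, in MiB

def summarize(blocksize_bytes, run_seconds):
    rates = [TRANSFER_MB / t for t in run_seconds]      # per-run MB/s
    avg = sum(rates) / len(rates)                       # R(avg,MB/s)
    var = sum((r - avg) ** 2 for r in rates) / len(rates)
    std = math.sqrt(var)                                # R(std,MB/s)
    iops = avg / (blocksize_bytes / (1024.0 * 1024.0))  # R(IOPS)
    return avg, std, iops

# First row of the first table: blocksize 67108864, runs of
# 15.739 / 15.339 / 16.511 seconds.
print("%.3f  %.3f  %.3f" % summarize(67108864, [15.739, 15.339, 16.511]))
# Prints roughly 64.613  1.960  1.010, matching that row's
# 64.613 / 1.959 / 1.010.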

OK, as I expected, at the SCST level everything is clear, and the forced-ordering change made no difference.

But still, a single read stream should be fastest with a single thread. Otherwise, something is wrong somewhere in the I/O path: the block layer, readahead or the I/O scheduler. And, apparently, that is what we have here, so we should find out the cause.

Can you check if noop on the target and/or initiator makes any difference? Case 5 with 1 and 2 threads will be sufficient.

Thanks,
Vlad
