2009/6/30 Vladislav Bolkhovitin <vst@xxxxxxxx>:

We started with 2.6.29, so why not complete with it (to spare Ronald the
additional effort of moving to 2.6.30)?
Wu Fengguang, on 06/30/2009 05:04 AM wrote:

2. Default vanilla 2.6.29 kernel, 512 KB read-ahead, the rest is default

How about a 2MB RAID readahead size? That translates into about a 512KB
per-disk readahead size.

OK. Ronald, can you run 4 more test cases, please:
7. Default vanilla 2.6.29 kernel, 2MB read-ahead, the rest is default
8. Default vanilla 2.6.29 kernel, 2MB read-ahead, 64 KB max_sectors_kb,
   the rest is default
9. Vanilla 2.6.29 kernel patched with Fengguang's patch, 2MB read-ahead,
   the rest is default
10. Vanilla 2.6.29 kernel patched with Fengguang's patch, 2MB read-ahead,
    64 KB max_sectors_kb, the rest is default
On Mon, Jun 29, 2009 at 10:21:24PM +0800, Wu Fengguang wrote:

On Mon, Jun 29, 2009 at 10:00:20PM +0800, Ronald Moesbergen wrote:

... tests ...

I made a blind average of the results:

N    MB/s       IOPS      case
0    114.859     984.148  Unpatched, 128KB readahead, 512 max_sectors_kb
1    122.960     981.213  Unpatched, 512KB readahead, 512 max_sectors_kb
2    120.709     985.111  Unpatched, 2MB readahead, 512 max_sectors_kb
3    158.732    1004.714  Unpatched, 512KB readahead, 64 max_sectors_kb
4    159.237     979.659  Unpatched, 2MB readahead, 64 max_sectors_kb
5    114.583     982.998  Patched, 128KB readahead, 512 max_sectors_kb
6    124.902     987.523  Patched, 512KB readahead, 512 max_sectors_kb
7    127.373     984.848  Patched, 2MB readahead, 512 max_sectors_kb
8    161.218     986.698  Patched, 512KB readahead, 64 max_sectors_kb
9    163.908     574.651  Patched, 2MB readahead, 64 max_sectors_kb
So before/after patch:
avg throughput 135.299 => 138.397 by +2.3%
avg IOPS 986.969 => 903.344 by -8.5%
The IOPS is a bit weird.
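(For anyone re-deriving these two averages, a quick sketch follows; it assumes
the table above has been saved verbatim to a file named results.txt. Note that
the -8.5% average IOPS comes almost entirely from the single 574.651 outlier in
case 9; over cases 5-8 the patched IOPS average is essentially unchanged.)

  awk '$1 ~ /^[0-9]+$/ {
           if ($1 <= 4) { ut += $2; ui += $3 }   # cases 0-4: unpatched
           else         { pt += $2; pi += $3 }   # cases 5-9: patched
       }
       END {
           printf "unpatched: %.3f MB/s  %.3f IOPS\n", ut/5, ui/5
           printf "patched:   %.3f MB/s  %.3f IOPS\n", pt/5, pi/5
       }' results.txt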
Summaries:
- this patch improves RAID throughput by +2.3% on average
- after this patch, 2MB readahead performs slightly better
  (by 1-2%) than 512KB readahead

and the most important one:

- 64 max_sectors_kb performs much better than 512 max_sectors_kb, by ~30%!
On Mon, Jun 29, 2009 at 11:37:41PM +0800, Vladislav Bolkhovitin wrote:

Yes, I just wanted to point it out ;)

Wu Fengguang, on 06/29/2009 07:01 PM wrote:

OK, now I tend to agree on decreasing max_sectors_kb and increasing
read_ahead_kb. But before actually trying to push that idea I'd like to
- do more benchmarks
- figure out why context readahead didn't help SCST performance
  (previous traces show that context readahead is submitting perfect
  large io requests, so I wonder if it's some io scheduler bug)
Because, as we found out, without your http://lkml.org/lkml/2009/5/21/319
patch read-ahead was nearly disabled, hence there was no difference in which
algorithm was used?
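(As an aside on the "perfect large io requests" observation above: the request
sizes actually dispatched to a member disk can be watched with blktrace; the
/dev/sdb below is only a placeholder for one of the RAID member disks.)

  # 'D' (dispatch) events show the size of each request handed to the disk driver
  blktrace -d /dev/sdb -o - | blkparse -i - | grep ' D '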
Ronald, can you run the following tests, please? This time with two hosts, an
initiator (client) and a target (server) connected over 1 Gbps iSCSI. It would
be best if vanilla 2.6.29 were run on the client, but any other kernel is fine
as well; just specify which one. Blockdev-perftest should be run as before in
buffered mode, i.e. with the "-a" switch.
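(Purely as an illustration of the two-host setup being asked for here, the
initiator-side iSCSI login with open-iscsi would look roughly like the sketch
below; the portal address and target IQN are placeholders, not Ronald's actual
setup.)

  iscsiadm -m discovery -t sendtargets -p 192.168.1.10
  iscsiadm -m node -T iqn.2009-06.example:storage.disk1 -p 192.168.1.10 --login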
I could, but only the first 'dd' run of blockdev-perftest will have any value,
since all the others will be served from the target's cache. Won't that make
the results pretty much useless? Are you sure this is what you want me to
test?
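(For what it's worth, the usual way to keep repeat reads from being served out
of the target's page cache between runs is to drop the cache on the target
host; whether cold-cache numbers are what is actually wanted here is exactly
the open question above.)

  sync
  echo 3 > /proc/sys/vm/drop_caches   # on the target: drop page cache, dentries and inodes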