Re: Slow file transfer speeds with CFQ IO scheduler in some cases

From: Vitaly V. Bursov
Date: Thu Nov 13 2008 - 01:54:24 EST


Jeff Moyer writes:
> Jens Axboe <jens.axboe@xxxxxxxxxx> writes:
>
>> On Mon, Nov 10 2008, Jeff Moyer wrote:
>>> Jens Axboe <jens.axboe@xxxxxxxxxx> writes:
>>>
>>>> On Sun, Nov 09 2008, Vitaly V. Bursov wrote:
>>>>> Hello,
>>>>>
>>>>> I'm building a small server system with an openvz kernel and have run
>>>>> into some IO performance problems. Reading a single file via NFS
>>>>> delivers around 9 MB/s over a gigabit network, but when reading, say,
>>>>> 2 different files (or the same file twice) at the same time, I get
>>>>> >60 MB/s.
>>>>>
>>>>> Changing the IO scheduler to deadline or anticipatory fixes the problem.
>>>>>
>>>>> Tested kernels:
>>>>> OpenVZ RHEL5 028stab059.3 (9 MB/s with HZ=100, 20 MB/s with HZ=1000;
>>>>> fast local reads)
>>>>> Vanilla 2.6.27.5 (40 MB/s with HZ=100; slow local reads)
>>>>>
>>>>> Vanilla performs better in the worst case, but I believe 40 MB/s is
>>>>> still low considering the test results below.
>>>> Can you check with this patch applied?
>>>>
>>>> http://bugzilla.kernel.org/attachment.cgi?id=18473&action=view
>>> Funny, I was going to ask the same question. ;) The reason Jens wants
>>> you to try this patch is that nfsd may be farming the I/O requests out
>>> to different threads, which then perform interleaved I/O. The above
>>> patch tries to detect this and allow cooperating processes to get
>>> disk time instead of waiting for the idle timeout.
>> Precisely :-)
>>
>> The only reason I haven't merged it yet is worry about the extra cost,
>> but I'll throw some SSD love at it and see how it turns out.
>
> OK, I'm not actually able to reproduce the horrible 9MB/s reported by
> Vitaly. Here are the numbers I see.

It's the 2.6.18-openvz-rhel5 kernel that gives me 9 MB/s; with 2.6.27 I get
~40-50 MB/s instead of the 80-90 MB/s I'd expect, as there should be no
bottleneck other than the network.
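
For reference, my test amounts to roughly the following; the device and path
names here are placeholders, not my actual setup:

  # on the server: switch the elevator at runtime for the exported disk
  echo deadline > /sys/block/sdX/queue/scheduler
  cat /sys/block/sdX/queue/scheduler    # active scheduler is shown in []

  # drop the page cache so the read is cold
  echo 3 > /proc/sys/vm/drop_caches

  # on the client: single sequential read over NFS
  dd if=/mnt/nfs/bigfile of=/dev/null bs=1M count=1024

Running two such dd's in parallel (against different files, or the same file
twice) is what gets me >60 MB/s under cfq.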

> A single dd performing a cold-cache read of a 1GB file from an
> NFS server. read_ahead_kb is 128 (the default) for all tests.
> cfq-cc denotes the cfq scheduler patched with the close
> cooperator patch. All numbers are in MB/s.
>
> nfsd threads |    1 |    2 |    4 |    8
> -----------------------------------------
> deadline     | 65.3 | 52.2 | 46.7 | 46.1
> cfq          | 64.1 | 57.8 | 53.3 | 46.9
> cfq-cc       | 65.7 | 55.8 | 52.1 | 40.3
>
> So, in my configuration, cfq and deadline both degrade in performance as
> the number of nfsd threads is increased. The close cooperator patch
> seems to hurt a bit more at 8 threads, instead of helping; I'm not sure
> why that is.

Interesting. I'll try changing the number of nfsd threads and see how it
performs on my setup. Setting it to 1 seems like a good idea for cfq on
non-high-end hardware; something like the commands below should do it.
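
From memory (the exact mechanism may need adjusting per distro), the thread
count can be changed at runtime like this:

  # set the number of nfsd threads on the server
  rpc.nfsd 1

  # or, equivalently, through the nfsd filesystem
  echo 1 > /proc/fs/nfsd/threads

  # RHEL-style init scripts read RPCNFSDCOUNT from /etc/sysconfig/nfs,
  # so setting RPCNFSDCOUNT=1 there makes the change persistent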

I'll look into it this evening.

--
Regards,
Vitaly
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/