Re: Multi-file USB mass-storage copy from PC to Nokia N900 slow when using CFQ

From: Paul Hartman
Date: Tue Jan 12 2010 - 00:37:01 EST


On Thu, Jan 7, 2010 at 7:41 AM, Corrado Zoccolo <czoccolo@xxxxxxxxx> wrote:
> On Tue, Jan 5, 2010 at 5:07 PM, Paul Hartman
> <paul.hartman+linux@xxxxxxxxx> wrote:
>> Hi,
>>
>> Copying more than one file from my PC (kernel 2.6.32) to my Nokia N900
>> over USB mass storage mode is very slow when CFQ is the I/O scheduler.
>> The target uses a vfat filesystem.
>>
>> I used iotop to monitor I/O in general, and I also ran the following
>> test. file1 and file2 are each 700M and were housed on a ramdrive for
>> this test (setup sketched below the timings). They were deleted from
>> the destination between runs.
>>
>> # one file at a time with sync in-between, fast speeds:
>> $ sync; time sh -c "cp file1 /mnt/usb; sync; cp file2 /mnt/usb; sync"
>>
>> real 1m25.697s
>> user 0m0.005s
>> sys 0m2.509s
>>
>> # copy two files in a row, then sync, speed is bad:
>> $ sync; time sh -c "cp file1 file2 /mnt/usb; sync"
>>
>> real 6m51.439s
>> user 0m0.007s
>> sys 0m2.615s
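>>
>> For reference, a rough sketch of how the ramdrive and test files can
>> be set up (the /mnt/ram mount point, tmpfs size, and use of /dev/zero
>> are example choices, not exactly what I ran):
>>
>> $ mount -t tmpfs -o size=2g tmpfs /mnt/ram
>> $ dd if=/dev/zero of=/mnt/ram/file1 bs=1M count=700
>> $ dd if=/dev/zero of=/mnt/ram/file2 bs=1M count=700
>> $ cd /mnt/ram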
>>
>>
>> With all I/O schedulers, the speed of the first test was the same, so
>> the slowdown only appears when writing more than one file to the N900
>> at a time. The timing results for the second test, switching the
>> scheduler before each run (see the sketch after the list), were:
>>
>> cfq: 6m51.439s
>> noop: 3m0.733s
>> anticipatory: 1m44.348s
>> deadline: 1m36.804s
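>>
>> A sketch of how each run can be scripted (run as root from the
>> ramdrive directory; the device name sdb is an assumption):
>>
>> for sched in cfq noop anticipatory deadline; do
>>     # select the I/O scheduler for the N900's block device
>>     echo "$sched" > /sys/block/sdb/queue/scheduler
>>     # clean the destination between runs
>>     rm -f /mnt/usb/file1 /mnt/usb/file2; sync
>>     time sh -c "cp file1 file2 /mnt/usb; sync"
>> done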
>>
>>
>> Also, under kernel 2.6.31 CFQ was about four times slower still, so
>> the removal of the old pdflush code may have made a difference here.
>> Copying 1 gigabyte takes about 1 minute at optimal speed, about 5
>> minutes with CFQ in kernel 2.6.32, and about 20 minutes with CFQ in
>> kernel 2.6.31.
>>
>> I thought you may be interested in case there's room to improve the
>> scheduler. If you want any other info let me know!
>
> Can you try setting:
> /sys/block/**your_device**/queue/iosched/fifo_expire_sync
> to a large number, e.g. 5000?
> CFQ handles normal writes much like deadline does, but it uses a much
> shorter interval when switching between two streams of writes, in
> order to reduce the latency of data hitting the disk. That could hurt
> performance on flash devices, where writes that do not cover whole
> blocks are painfully slow.
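>
> A minimal sketch of reading and changing the tunable (run as root;
> sdb below is just a placeholder for the device name):
>
> $ cat /sys/block/sdb/queue/iosched/fifo_expire_sync
> 125
> $ echo 5000 > /sys/block/sdb/queue/iosched/fifo_expire_sync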
>
> Corrado

Hi Corrado,

From the previous test, this value was 125. Now I'll try 5000 as you suggest:

real 6m25.435s
user 0m0.006s
sys 0m2.536s

So the result with 5000 is about the same as with 125.

Thanks
Paul