Re: FIO performance regression in 4.11 kernel vs. 4.10 kernel observed on ARM64
From: Arnd Bergmann
Date: Mon May 08 2017 - 07:19:07 EST
On Mon, May 8, 2017 at 1:07 PM, Will Deacon <will.deacon@xxxxxxx> wrote:
> Hi Scott,
>
> Thanks for the report.
>
> On Fri, May 05, 2017 at 06:37:55PM -0700, Scott Branden wrote:
>> I have updated the kernel to 4.11 and see significant performance
>> drops using fio-2.9.
>>
>> Using FIO, the performance drops from 281 KIOPS to 207 KIOPS using a
>> single core and task.
>> The percentage drop becomes even worse when multiple cores and
>> multiple threads are used.
>>
>> Platform is an ARM64-based A72. Can somebody reproduce the results, or
>> does anyone know what may have changed to cause such a dramatic drop?
>>
>> FIO command and resulting log output below using null_blk to remove
>> as many hardware specific driver dependencies as possible.
>>
>> modprobe null_blk queue_mode=2 irqmode=0 completion_nsec=0
>> submit_queues=1 bs=4096
>>
>> taskset 0x1 fio --randrepeat=1 --ioengine=libaio --direct=1 --numjobs=1
>> --gtod_reduce=1 --name=readtest --filename=/dev/nullb0 --bs=4k
>> --iodepth=128 --time_based --runtime=15 --readwrite=read
>
> I can confirm that I also see a ~20% drop in results from 4.10 to 4.11 on
> my AMD Seattle board w/ defconfig, but I can't see anything obvious in the
> log.
>
> Things you could try:
>
> 1. Try disabling CONFIG_NUMA in the 4.11 kernel (this was enabled in
> defconfig between the releases).
>
> 2. Try to reproduce on an x86 box
>
> 3. Have a go at bisecting the issue, so we can revert the offender if
> necessary.
One more thing to try early: as 4.11 gained support for blk-mq I/O
schedulers compared to 4.10, null_blk now also needs some extra cycles
for each I/O request. Try loading the driver with "queue_mode=0" or
"queue_mode=1" instead of "queue_mode=2".
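Concretely, that re-test might look like the following, reusing the module
parameters from Scott's original invocation (queue_mode=0 is the bio-based
path, queue_mode=1 the legacy request path, queue_mode=2 blk-mq):

```shell
# Sketch: reload null_blk in the two non-blk-mq queue modes and rerun
# the identical fio workload after each reload.
modprobe -r null_blk
modprobe null_blk queue_mode=0 irqmode=0 completion_nsec=0 bs=4096
# ... run the same taskset/fio command from the report ...
modprobe -r null_blk
modprobe null_blk queue_mode=1 irqmode=0 completion_nsec=0 bs=4096
# ... run the fio command again and compare KIOPS across the three modes.
```

If the regression only shows up with queue_mode=2, that points at the new
blk-mq scheduler path rather than a generic 4.11 change.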
Arnd