Re: [RFC 0/3] reduce latency of direct async compaction
From: Aaron Lu
Date: Thu Dec 10 2015 - 01:16:41 EST
On 12/10/2015 12:35 PM, Joonsoo Kim wrote:
> On Wed, Dec 09, 2015 at 01:40:06PM +0800, Aaron Lu wrote:
>> On Wed, Dec 09, 2015 at 09:33:53AM +0900, Joonsoo Kim wrote:
>>> On Tue, Dec 08, 2015 at 04:52:42PM +0800, Aaron Lu wrote:
>>>> On Tue, Dec 08, 2015 at 03:51:16PM +0900, Joonsoo Kim wrote:
>>>>> I added a work-around for this problem in isolate_freepages(). Please
>>>>> test the following one.
>>>>
>>>> Still no luck and the error is about the same:
>>>
>>> There is a mistake... Could you add () around
>>> cc->free_pfn & ~(pageblock_nr_pages-1), like the following?
>>>
>>> cc->free_pfn == (cc->free_pfn & ~(pageblock_nr_pages-1))
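
(A quick illustration of why the parentheses matter: in C, == binds more
tightly than &, so the unparenthesized check reduces to
(cc->free_pfn == cc->free_pfn) & ~(pageblock_nr_pages-1), whose low bit is
always clear, i.e. the test is always false and the work-around branch never
runs. The stand-alone demo below is only a sketch with made-up values, not
the actual isolate_freepages() code:

#include <stdio.h>

int main(void)
{
        /* Made-up stand-ins for cc->free_pfn and pageblock_nr_pages. */
        unsigned long free_pfn = 0x12400;        /* pageblock aligned */
        unsigned long pageblock_nr_pages = 512;

        /*
         * Missing parentheses: == binds tighter than &, so this is
         * (free_pfn == free_pfn) & ~(pageblock_nr_pages - 1), i.e.
         * 1 & an even mask, which is 0 regardless of free_pfn.
         */
        int buggy = free_pfn == free_pfn & ~(pageblock_nr_pages - 1);

        /* With parentheses: a real pageblock-alignment test. */
        int fixed = free_pfn == (free_pfn & ~(pageblock_nr_pages - 1));

        printf("buggy=%d fixed=%d\n", buggy, fixed);  /* buggy=0 fixed=1 */
        return 0;
}

That is presumably why the work-around only starts to help once the () are
added.)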
>>
>> Oh right, of course.
>>
>> Good news, the result is much better now:
>> $ cat {0..8}/swap
>> cmdline: /lkp/aaron/src/bin/usemem 100064603136
>> 100064603136 transferred in 72 seconds, throughput: 1325 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100072049664
>> 100072049664 transferred in 74 seconds, throughput: 1289 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100070246400
>> 100070246400 transferred in 92 seconds, throughput: 1037 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100069545984
>> 100069545984 transferred in 81 seconds, throughput: 1178 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100058895360
>> 100058895360 transferred in 78 seconds, throughput: 1223 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100066074624
>> 100066074624 transferred in 94 seconds, throughput: 1015 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100062855168
>> 100062855168 transferred in 77 seconds, throughput: 1239 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100060990464
>> 100060990464 transferred in 73 seconds, throughput: 1307 MB/s
>> cmdline: /lkp/aaron/src/bin/usemem 100064996352
>> 100064996352 transferred in 84 seconds, throughput: 1136 MB/s
>> Max: 1325 MB/s
>> Min: 1015 MB/s
>> Avg: 1194 MB/s
>
> Nice result! Thanks for testing.
> I will make a properly formatted patch soon.
Thanks for the nice work.
>
> Then, your concern is solved?

I think so.

> I think that the performance of always-always on this test case
> can't match that of always-never, because the migration cost of
> building hugepages is additionally charged to the always-always
> case. In exchange, it ends up with more hugepage mappings, which may
> result in better performance in some situations. I guess that is the
> intention of that option.
OK, I see.
Regards,
Aaron