Re: [PATCH v2 0/9] x86/clear_huge_page: multi-page clearing

From: Ankur Arora
Date: Tue Sep 05 2023 - 15:38:26 EST



Raghavendra K T <raghavendra.kt@xxxxxxx> writes:

> On 8/31/2023 12:19 AM, Ankur Arora wrote:
>> This series adds a multi-page clearing primitive, clear_pages(),
>> which enables more effective use of x86 string instructions by
>> advertising the real region-size to be cleared.
>> Region-size can be used as a hint by uarchs to optimize the
>> clearing.
>> Also add allow_resched() which marks a code-section as allowing
>> rescheduling in the irqentry_exit path. This allows clear_pages()
>> to get by without having to call cond_resched() periodically.
>> (preempt_model_full() already handles this via
>> irqentry_exit_cond_resched(), so we handle this similarly for
>> preempt_model_none() and preempt_model_voluntary().)
>>
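
For illustration, a rough sketch of the contrast described above (the
clear_pages() signature and the details around allow_resched() are
assumptions made for this sketch, not the actual code from the series):

/*
 * Today: clear a hugepage one 4K page at a time, with explicit
 * rescheduling points between pages.
 */
static void clear_huge_page_4k(void *kaddr, unsigned int npages)
{
	unsigned int i;

	for (i = 0; i < npages; i++) {
		clear_page(kaddr + i * PAGE_SIZE);
		cond_resched();
	}
}

/*
 * With the series: advertise the full extent to the CPU in one call.
 */
static void clear_huge_page_extent(void *kaddr, unsigned int npages)
{
	/*
	 * The section containing this call is marked with allow_resched(),
	 * so preemption can happen from irqentry_exit even under
	 * preempt=none/voluntary; no cond_resched() is needed while the
	 * extent is being cleared.
	 */
	clear_pages(kaddr, npages);	/* e.g. one REP STOSB over the whole extent */
}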
>
> Hello Ankur,
> Thanks for the patches.
>
> I tried the patches; improvements look similar to V1 (even without
> the circuitous chunk optimizations).

Thanks for testing Raghu.

> Still we see a similar 50-60% improvement for 1G and 2M page sizes.
>
> SUT: Bergamo
> CPU family: 25
> Model: 160
> Thread(s) per core: 2
> Core(s) per socket: 128
> Socket(s): 2
>
> NUMA:
> NUMA node(s): 2
> NUMA node0 CPU(s): 0-127,256-383
> NUMA node1 CPU(s): 128-255,384-511
>
> Test: use mmap(MAP_HUGETLB) to demand-fault a 64GB region (NUMA node0), for
> both base-hugepage-size=2M and 1GB.
> The current result is with thp=always, but madv also did not make much difference.
> perf stat -r 10 -d -d numactl -m 0 -N 0 <test>
>
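
For reference, a minimal reconstruction of the kind of test described
above (an assumption of what such a test looks like, not the actual
benchmark: 2M default hugepage size, one write per hugepage to trigger
the demand fault; for the 1GB case add MAP_HUGE_1GB to the mmap flags
and step by 1GB):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

#define SZ_64G		(64UL << 30)
#define HUGEPAGE_2M	(2UL << 20)

int main(void)
{
	unsigned char *buf;
	unsigned long off;

	/* Needs enough hugepages reserved, e.g. via /proc/sys/vm/nr_hugepages. */
	buf = mmap(NULL, SZ_64G, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * One write per hugepage demand-faults it, making the kernel
	 * clear the whole hugepage.
	 */
	for (off = 0; off < SZ_64G; off += HUGEPAGE_2M)
		buf[off] = 1;

	munmap(buf, SZ_64G);
	return 0;
}

Timed under perf stat -r 10 -d -d numactl -m 0 -N 0, as in the quoted
command.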
> time in seconds elapsed (average of 10 runs) (lower = better)
>
> Result:
> base: mm/clear_huge_page
> patched: x86/clear_huge_page
>
> page-size    base        patched      Improvement %
> 2M           5.0779      2.50623      50.64
> 1G           2.50623     1.012439     59.60

Seems like Bergamo improves over Milan both for 4K BW and for
extent=2MB/extent=1GB.

> Please feel free to carry:
>
> Tested-by: Raghavendra K T <raghavendra.kt@xxxxxxx>
> for any minor changes.

Thank you. Will add.

--
ankur