Re: [PATCH v3] irqchip: gicv3-its: Use NUMA aware memory allocation for ITS tables
From: Marc Zyngier
Date: Thu Dec 13 2018 - 06:54:49 EST
On 13/12/2018 10:59, Shameer Kolothum wrote:
> From: Shanker Donthineni <shankerd@xxxxxxxxxxxxxx>
>
> The NUMA node information is visible to the ITS driver but is not
> used for anything other than handling hardware errata. ITS/GICR
> hardware accesses to the local NUMA node are usually quicker than
> accesses to a remote NUMA node; how much slower the remote accesses
> are depends on the implementation details.
>
> This patch allocates memory for the ITS management tables and the
> command queue from the corresponding NUMA node using the appropriate
> NUMA-aware functions. This improves ITS table read latency on
> systems that have more than one ITS block and slower inter-node
> accesses.
>
> Apache web server benchmarking using the ab tool on a HiSilicon D06
> board with multiple NUMA memory nodes shows time-per-request and
> transfer-rate improvements of ~3.6% with this patch.
>
> Signed-off-by: Shanker Donthineni <shankerd@xxxxxxxxxxxxxx>
> Signed-off-by: Hanjun Guo <guohanjun@xxxxxxxxxx>
> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@xxxxxxxxxx>
> ---
>
> This is to revive the patch originally sent by Shanker[1] and
> to back it up with a benchmark test. Any further testing of
> this is most welcome.
>
> v2-->v3
> -Addressed comments to use page_address().
> -Added Benchmark results to commit log.
> -Removed Tested-by from Ganapatrao for now.
>
> v1-->v2
> -Edited commit text.
> -Added Ganapatrao's tested-by.
>
> Benchmark test details:
> --------------------------------
> Test Setup:
> -D06 with DIMMs on nodes 0 (Sock#0) and 3 (Sock#1).
> -ITS belongs to NUMA node 0.
> -Filesystem mounted on a PCIe NVMe based disk.
> -Apache server installed on D06.
> -Running the ab benchmark in concurrency mode from a remote machine
>  connected to D06 via an hns3 (PCIe) network port:
> "ab -k -c 750 -n 2000000 http://10.202.225.188/"
>
> Test results are the average of 15 runs.
>
> For 4.20-rc1 Kernel,
> ----------------------------
> Time per request (mean, concurrent) = 0.02753 [ms]
> Transfer Rate = 416501 [Kbytes/sec]
>
> For 4.20-rc1 + this patch,
> ----------------------------------
> Time per request (mean, concurrent) = 0.02653 [ms]
> Transfer Rate = 431954 [Kbytes/sec]
>
> % improvement ~3.6%
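>
> (Sanity check on the arithmetic: (431954 - 416501) / 416501 ~= 3.7%
> on transfer rate and (0.02753 - 0.02653) / 0.02753 ~= 3.6% on time
> per request, consistent with the quoted figure.)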
>
> vmstat shows around 170K-200K interrupts per second.
>
> ~# vmstat 1 -w
> (columns trimmed to r, b, swpd, free and in)
> procs -------memory-------- -system-
>   r  b    swpd      free      in
>   5  0       0  30166724  102794
>   9  0       0  30141828  171148
>   5  0       0  30150160  207185
>  13  0       0  30145924  175691
>  15  0       0  30140792  145250
>  13  0       0  30135556  201879
>  13  0       0  30134864  192391
>  10  0       0  30133632  168880
> ....
>
> [1] https://patchwork.kernel.org/patch/9833339/
The figures certainly look convincing. I'd need someone from Cavium to
benchmark it on their hardware and come back with results so that we can
make a decision on this.
Thanks,
M.
--
Jazz is not dead. It just smells funny...