Re: [PATCH] Revert "arm64: Increase the max granular size"
From: Catalin Marinas
Date: Thu Apr 06 2017 - 11:58:39 EST
On Thu, Apr 06, 2017 at 12:52:13PM +0530, Imran Khan wrote:
> On 4/5/2017 10:13 AM, Imran Khan wrote:
> >> We may have to revisit this logic and consider L1_CACHE_BYTES the
> >> _minimum_ of cache line sizes in arm64 systems supported by the kernel.
> >> Do you have any benchmarks on Cavium boards that would show significant
> >> degradation with 64-byte L1_CACHE_BYTES vs 128?
> >>
> >> For non-coherent DMA, the simplest is to make ARCH_DMA_MINALIGN the
> >> _maximum_ of the supported systems:
> >>
> >> diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
> >> index 5082b30bc2c0..4b5d7b27edaf 100644
> >> --- a/arch/arm64/include/asm/cache.h
> >> +++ b/arch/arm64/include/asm/cache.h
> >> @@ -18,17 +18,17 @@
> >>
> >> #include <asm/cachetype.h>
> >>
> >> -#define L1_CACHE_SHIFT 7
> >> +#define L1_CACHE_SHIFT 6
> >> #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
> >>
> >> /*
> >> * Memory returned by kmalloc() may be used for DMA, so we must make
> >> - * sure that all such allocations are cache aligned. Otherwise,
> >> - * unrelated code may cause parts of the buffer to be read into the
> >> - * cache before the transfer is done, causing old data to be seen by
> >> - * the CPU.
> >> + * sure that all such allocations are aligned to the maximum *known*
> >> + * cache line size on ARMv8 systems. Otherwise, unrelated code may cause
> >> + * parts of the buffer to be read into the cache before the transfer is
> >> + * done, causing old data to be seen by the CPU.
> >> */
> >> -#define ARCH_DMA_MINALIGN L1_CACHE_BYTES
> >> +#define ARCH_DMA_MINALIGN (128)
> >>
> >> #ifndef __ASSEMBLY__
> >>
> >> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> >> index 392c67eb9fa6..30bafca1aebf 100644
> >> --- a/arch/arm64/kernel/cpufeature.c
> >> +++ b/arch/arm64/kernel/cpufeature.c
> >> @@ -976,9 +976,9 @@ void __init setup_cpu_features(void)
> >> if (!cwg)
> >> pr_warn("No Cache Writeback Granule information, assuming
> >> cache line size %d\n",
> >> cls);
> >> - if (L1_CACHE_BYTES < cls)
> >> - pr_warn("L1_CACHE_BYTES smaller than the Cache Writeback Granule (%d < %d)\n",
> >> - L1_CACHE_BYTES, cls);
> >> + if (ARCH_DMA_MINALIGN < cls)
> >> + pr_warn("ARCH_DMA_MINALIGN smaller than the Cache Writeback Granule (%d < %d)\n",
> >> + ARCH_DMA_MINALIGN, cls);
> >> }
> >>
> >> static bool __maybe_unused
> >
> > This change was discussed at [1], but the discussion was not concluded,
> > apparently because no one came back with a test report and numbers. After
> > including this change in our local kernel, we are seeing a significant
> > throughput improvement. For example, with:
> >
> > iperf -c 192.168.1.181 -i 1 -w 128K -t 60
> >
> > the average throughput improves by about 30% (from 180 Mbps to 230 Mbps).
> > Could you please let us know whether this change can be included in the
> > upstream kernel?
> >
> > [1]: https://groups.google.com/forum/#!topic/linux.kernel/P40yDB90ePs
>
> Could you please provide some feedback on the above-mentioned query?

Do you have an explanation for the performance variation when
L1_CACHE_BYTES is changed? We'd need to understand how the network stack
is affected by L1_CACHE_BYTES and in which contexts it uses it (is it for
non-coherent DMA?).
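
For reference, one plausible path is via SMP_CACHE_BYTES: the generic
headers alias it to L1_CACHE_BYTES, and the skb layout code aligns on it,
roughly like this (paraphrased from include/linux/cache.h and
include/linux/skbuff.h; the exact form may differ between kernel
versions):

  /* include/linux/cache.h */
  #ifndef SMP_CACHE_BYTES
  #define SMP_CACHE_BYTES L1_CACHE_BYTES
  #endif

  /* include/linux/skbuff.h: align skb data area to the cache line size */
  #define SKB_DATA_ALIGN(X) ALIGN(X, SMP_CACHE_BYTES)

If that is where the 30% comes from, it would be padding/footprint in the
skb allocations rather than non-coherent DMA behaviour.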

The Cavium guys haven't shown any numbers (IIUC) to back the
L1_CACHE_BYTES performance improvement, but I would not revert the
original commit, since ARCH_DMA_MINALIGN definitely needs to cover the
maximum available cache line size, which is 128 bytes for them.
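
Note that ARCH_DMA_MINALIGN feeds straight into the minimum kmalloc()
alignment, roughly (paraphrased from include/linux/slab.h):

  /* kmalloc() buffers are aligned to at least ARCH_DMA_MINALIGN */
  #ifdef ARCH_DMA_MINALIGN
  #define ARCH_KMALLOC_MINALIGN ARCH_DMA_MINALIGN
  #else
  #define ARCH_KMALLOC_MINALIGN __alignof__(unsigned long long)
  #endif

With a value smaller than the CWG, two kmalloc() buffers could share a
cache writeback granule, and cache maintenance for a non-coherent DMA
transfer into one buffer could then corrupt its neighbour.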
--
Catalin