Re: [PATCH] cache: Workaround HiSilicon Taishan DC CVAU

From: Will Deacon
Date: Mon Dec 13 2021 - 13:56:55 EST


On Fri, Nov 26, 2021 at 05:11:39PM +0800, Weilong Chen wrote:
> Taishan's L1/L2 caches are inclusive and their data is kept consistent,
> so a change in L1 does not require a DC operation to push the cache line
> from L1 down to L2. It is therefore safe to skip cleaning the data cache
> by address to the point of unification.
>
> Without the IDC feature, the kernel needs to flush the icache as well as
> the dcache, which causes performance degradation.
>
> The flaw applies to V110/V200 variant 1.
>
> Signed-off-by: Weilong Chen <chenweilong@xxxxxxxxxx>
> ---
> Documentation/arm64/silicon-errata.rst | 2 ++
> arch/arm64/Kconfig | 11 +++++++++
> arch/arm64/include/asm/cputype.h | 2 ++
> arch/arm64/kernel/cpu_errata.c | 32 ++++++++++++++++++++++++++
> arch/arm64/tools/cpucaps | 1 +
> 5 files changed, 48 insertions(+)
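
For context, the cost being worked around here sits in the instruction-cache
synchronisation path: without CTR_EL0.IDC (or a per-CPU capability like the
one this patch adds), the kernel must first clean the data cache by VA to the
point of unification before invalidating the instruction cache. Below is a
minimal C sketch of that decision; the names sync_icache_range(),
cpu_has_idc() and cpu_has_inclusive_cache_quirk() are illustrative stand-ins,
not the kernel's actual symbols.

/*
 * Illustrative sketch only -- not the kernel's implementation.
 * When I-cache/D-cache coherency is guaranteed (CTR_EL0.IDC, or a
 * CPU-specific capability such as the one proposed in this patch),
 * the "DC CVAU" clean-to-PoU loop can be skipped before the I-cache
 * invalidation.
 */
#include <stdint.h>
#include <stdbool.h>

#define CACHE_LINE_SIZE	64	/* assumed line size for the sketch */

extern bool cpu_has_idc(void);			/* CTR_EL0.IDC == 1 */
extern bool cpu_has_inclusive_cache_quirk(void);	/* e.g. the Taishan case */

static void sync_icache_range(uintptr_t start, uintptr_t end)
{
	uintptr_t base = start & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
	uintptr_t addr;

	if (!cpu_has_idc() && !cpu_has_inclusive_cache_quirk()) {
		/* Clean the D-cache by VA to the Point of Unification. */
		for (addr = base; addr < end; addr += CACHE_LINE_SIZE)
			asm volatile("dc cvau, %0" :: "r" (addr) : "memory");
		asm volatile("dsb ish" ::: "memory");
	}

	/* Invalidate the I-cache for the range, then synchronise. */
	for (addr = base; addr < end; addr += CACHE_LINE_SIZE)
		asm volatile("ic ivau, %0" :: "r" (addr) : "memory");
	asm volatile("dsb ish" ::: "memory");
	asm volatile("isb" ::: "memory");
}

In effect, the patch lets the affected Taishan parts take the same fast path
that IDC-capable CPUs already take.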

Hmm. We don't usually apply optimisations for specific CPUs on arm64, simply
because the diversity of CPUs out there means it quickly becomes a
fragmented mess.

Is this patch purely a performance improvement? If so, please can you
provide some numbers in an attempt to justify it?

Thanks,

Will