Re: [PATCH v3 2/2] riscv: introduce asm/swab.h
From: Eric Biggers
Date: Fri Apr 04 2025 - 15:28:50 EST
On Fri, Apr 04, 2025 at 04:47:52PM +0100, Ben Dooks wrote:
> On 03/04/2025 21:34, Ignacio Encinas wrote:
> > Implement endianness swap macros for RISC-V.
> >
> > Use the rev8 instruction when Zbb is available. Otherwise, rely on the
> > default mask-and-shift implementation.
> >
> > Signed-off-by: Ignacio Encinas <ignacio@xxxxxxxxxxxx>
> > ---
> > arch/riscv/include/asm/swab.h | 43 +++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 43 insertions(+)
> >
> > diff --git a/arch/riscv/include/asm/swab.h b/arch/riscv/include/asm/swab.h
> > new file mode 100644
> > index 000000000000..7352e8405a99
> > --- /dev/null
> > +++ b/arch/riscv/include/asm/swab.h
> > @@ -0,0 +1,43 @@
> > +/* SPDX-License-Identifier: GPL-2.0-only */
> > +#ifndef _ASM_RISCV_SWAB_H
> > +#define _ASM_RISCV_SWAB_H
> > +
> > +#include <linux/types.h>
> > +#include <linux/compiler.h>
> > +#include <asm/cpufeature-macros.h>
> > +#include <asm/hwcap.h>
> > +#include <asm-generic/swab.h>
> > +
> > +#if defined(CONFIG_RISCV_ISA_ZBB) && !defined(NO_ALTERNATIVE)
> > +
> > +#define ARCH_SWAB(size) \
> > +static __always_inline unsigned long __arch_swab##size(__u##size value) \
> > +{ \
> > + unsigned long x = value; \
> > + \
> > + if (riscv_has_extension_likely(RISCV_ISA_EXT_ZBB)) { \
> > + asm volatile (".option push\n" \
> > + ".option arch,+zbb\n" \
> > + "rev8 %0, %1\n" \
> > + ".option pop\n" \
> > + : "=r" (x) : "r" (x)); \
> > + return x >> (BITS_PER_LONG - size); \
> > + } \
> > + return ___constant_swab##size(value); \
> > +}
> > +
> > +#ifdef CONFIG_64BIT
> > +ARCH_SWAB(64)
> > +#define __arch_swab64 __arch_swab64
> > +#endif
> > +
> > +ARCH_SWAB(32)
> > +#define __arch_swab32 __arch_swab32
> > +
> > +ARCH_SWAB(16)
> > +#define __arch_swab16 __arch_swab16
> > +
> > +#undef ARCH_SWAB
> > +
> > +#endif /* defined(CONFIG_RISCV_ISA_ZBB) && !defined(NO_ALTERNATIVE) */
> > +#endif /* _ASM_RISCV_SWAB_H */
> >
>
> I was having a look at this as well, using the alternatives macros.
>
> It would be nice to have a __zbb_swab defined so that you could do some
> timing checks with it; it would be interesting to benchmark how much
> these improve byte-swapping.
FYI, in case you missed the previous discussion
(https://lore.kernel.org/linux-riscv/20250302220426.GC2079@quark.localdomain/):
the overhead of the slow generic byte-swapping on RISC-V is currently easy to
see in the CRC benchmark. For example, compare:
crc32_le_benchmark: len=16384: 2440 MB/s
to
crc32_be_benchmark: len=16384: 674 MB/s
But the main loops of crc32_le and crc32_be are basically the same, except
crc32_le does le64_to_cpu() (or le32_to_cpu()) on the data whereas crc32_be does
be64_to_cpu() (or be32_to_cpu()). The above numbers came from a little-endian
CPU, where le*_to_cpu() is a no-op and be*_to_cpu() is a byte-swap.
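To make that concrete, the two loops have roughly this shape (just a sketch,
not the actual lib/crc code; crc32_step() is a hypothetical stand-in for the
real table-lookup/folding step):

	u32 crc32_step(u32 crc, u64 data);	/* hypothetical helper */

	/* crc32_le-style loop: le64_to_cpu() compiles to nothing on a
	 * little-endian CPU. */
	while (len >= sizeof(u64)) {
		crc = crc32_step(crc, le64_to_cpu(get_unaligned((const __le64 *)p)));
		p += sizeof(u64);
		len -= sizeof(u64);
	}

	/* crc32_be-style loop: identical, except that be64_to_cpu() is a
	 * full byte-swap on a little-endian CPU, which without this patch
	 * means the generic mask-and-shift code. */
	while (len >= sizeof(u64)) {
		crc = crc32_step(crc, be64_to_cpu(get_unaligned((const __be64 *)p)));
		p += sizeof(u64);
		len -= sizeof(u64);
	}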
To reproduce this, build a kernel from the latest upstream with
CONFIG_CRC_KUNIT_TEST=y and CONFIG_CRC_BENCHMARK=y, boot it on a CPU that has
the Zbc extension, and check dmesg for the benchmark results.
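Concretely (nothing here is specific to this patch, just the usual KUnit/dmesg
workflow):

	# kernel .config fragment
	CONFIG_CRC_KUNIT_TEST=y
	CONFIG_CRC_BENCHMARK=y

	# on the booted target; matches the crc*_benchmark lines above
	dmesg | grep _benchmark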
This patch should mostly close that gap, though I don't currently have
hardware to confirm that myself.
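For reference, with the Zbb alternative patched in, I'd expect the swab itself
to reduce to rev8 plus at most one shift, i.e. roughly (illustrative only, on
RV64; I haven't inspected the actual generated code):

	# __arch_swab64: rev8 alone suffices, the shift is by zero
	rev8	a0, a0

	# __arch_swab32: rev8 swaps all 8 bytes, then the result is
	# shifted down per 'x >> (BITS_PER_LONG - size)' in the patch
	rev8	a0, a0
	srli	a0, a0, 32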
- Eric