Re: [PATCH] x86/crc32: optimize tail handling for crc32c short inputs
From: Eric Biggers
Date: Wed Mar 05 2025 - 14:16:19 EST
On Wed, Mar 05, 2025 at 02:26:53PM +0000, David Laight wrote:
> On Tue, 4 Mar 2025 13:32:16 -0800
> Eric Biggers <ebiggers@xxxxxxxxxx> wrote:
>
> > From: Eric Biggers <ebiggers@xxxxxxxxxx>
> >
> > For handling the 0 <= len < sizeof(unsigned long) bytes left at the end,
> > do a 4-2-1 step-down instead of a byte-at-a-time loop. This allows
> > taking advantage of wider CRC instructions. Note that crc32c-3way.S
> > already uses this same optimization too.
>
> An alternative is to add extra zero bytes at the start of the buffer.
> They don't affect the crc and just need the first 8 bytes shifted left.
>
> I think any non-zero 'crc-in' just needs to be xor'ed over the first
> 4 actual data bytes.
> (It's over 40 years since I did the maths of CRC.)
>
> You won't notice the misaligned accesses all down the buffer.
> When I was testing different ipcsum code, misaligned buffers
> cost less than 1 clock per cache line.
> I think that was even true for the versions that managed 12 bytes
> per clock (including the one Linus committed).
>
> David
Sure, but that only works when len >= sizeof(unsigned long).  Also, the initial
CRC sometimes has to be divided between two unsigned longs: when
len % sizeof(unsigned long) < 4, only part of the 32-bit CRC fits in the
leading partial word and the rest has to be folded into the next one.

The following implements this, and you can play around with it if you want.
There may be a way to optimize it a bit more, but I think you'll find it's
more complex than you thought.  I think I'd like to stay with the shorter and
simpler 4-2-1 step-down.

u32 crc32c_arch(u32 crc, const u8 *p, size_t len)
{
	if (!static_branch_likely(&have_crc32))
		return crc32c_base(crc, p, len);

	if (IS_ENABLED(CONFIG_X86_64) && len >= CRC32C_PCLMUL_BREAKEVEN &&
	    static_branch_likely(&have_pclmulqdq) && crypto_simd_usable()) {
		kernel_fpu_begin();
		crc = crc32c_x86_3way(crc, p, len);
		kernel_fpu_end();
		return crc;
	}
	if (len % sizeof(unsigned long) != 0) {
		unsigned long msgpoly;
		u32 orig_crc = crc;

		if (len < sizeof(unsigned long)) {
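			/* Whole input is shorter than one word: 4-2-1 step-down. */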
			if (sizeof(unsigned long) > 4 && (len & 4)) {
				asm("crc32l %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*(u32 *)p));
				p += 4;
			}
			if (len & 2) {
				asm("crc32w %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*(u16 *)p));
				p += 2;
			}
			if (len & 1)
				asm("crc32b %1, %0"
				    : "+r" (crc) : ASM_INPUT_RM (*p));
			return crc;
		}
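		/*
		 * Fold as much of the original CRC as fits into the first
		 * len % sizeof(unsigned long) message bytes, shifted to the
		 * top of a zero-padded word (the zero-prepend trick).
		 */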
		msgpoly = (get_unaligned((unsigned long *)p) ^ orig_crc) <<
			  (8 * (-len % sizeof(unsigned long)));
		p += len % sizeof(unsigned long);
		crc = 0;
		asm(CRC32_INST : "+r" (crc) : "r" (msgpoly));
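		/*
		 * Any CRC bytes that didn't fit in the partial chunk above
		 * are folded into the next full word instead.
		 */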
		msgpoly = get_unaligned((unsigned long *)p) ^
			  (orig_crc >> (8 * (len % sizeof(unsigned long))));
		p += sizeof(unsigned long);
		len -= (len % sizeof(unsigned long)) + sizeof(unsigned long);
		asm(CRC32_INST : "+r" (crc) : "r" (msgpoly));
	}
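
	/* The remaining length is now a multiple of the word size. */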
	for (len /= sizeof(unsigned long); len != 0;
	     len--, p += sizeof(unsigned long))
		asm(CRC32_INST : "+r" (crc) : ASM_INPUT_RM (*(unsigned long *)p));
	return crc;
}
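
For anyone who wants to play with the tail handling outside the kernel, here
is a rough userspace sketch (not kernel code; the function names are made up
for illustration) comparing a byte-at-a-time loop against the 4-2-1 step-down,
both built on the SSE4.2 CRC32C intrinsics.  Build with: gcc -O2 -msse4.2

/*
 * Rough userspace sketch: byte-at-a-time tail loop vs. 4-2-1 step-down,
 * both using the SSE4.2 CRC32C intrinsics.
 */
#include <nmmintrin.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint32_t crc32c_tail_bytewise(uint32_t crc, const uint8_t *p, size_t len)
{
	while (len--)
		crc = _mm_crc32_u8(crc, *p++);
	return crc;
}

static uint32_t crc32c_tail_step421(uint32_t crc, const uint8_t *p, size_t len)
{
	/* Assumes len < 8, like the tail handling in the kernel code above */
	if (len & 4) {
		uint32_t v;

		memcpy(&v, p, sizeof(v));
		crc = _mm_crc32_u32(crc, v);
		p += 4;
	}
	if (len & 2) {
		uint16_t v;

		memcpy(&v, p, sizeof(v));
		crc = _mm_crc32_u16(crc, v);
		p += 2;
	}
	if (len & 1)
		crc = _mm_crc32_u8(crc, *p);
	return crc;
}

int main(void)
{
	static const uint8_t buf[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

	for (size_t len = 0; len < 8; len++)
		printf("len=%zu  bytewise=%08x  step421=%08x\n", len,
		       crc32c_tail_bytewise(~0u, buf, len),
		       crc32c_tail_step421(~0u, buf, len));
	return 0;
}

Both functions print the same CRC for every length, since crc32l/crc32w are
equivalent to feeding the same bytes through crc32b one at a time.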