On Tue, 27 Jun 2023 at 04:00, Borislav Petkov <bp@xxxxxxxxx> wrote:
> - Improve csum_partial()'s performance
Honestly, looking at that patch, my reaction is "why did it get
unrolled in 64-byte chunks, if 40 bytes is the magic value"?
Particularly when there is then that "do a carry op each 32 bytes to
make 32-byte chunks independent and increase ILP". So even the 64-byte
case isn't *actually* doing a 64-byte unrolling, it's really doing two
32-byte unrollings in parallel.
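(Purely to make that concrete, and not the actual code from that patch -
the helper name and exact shape here are made up for illustration - the
two-chain trick is roughly this:

#include <stdint.h>

/*
 * Illustration only: one 64-byte block as two independent 32-byte
 * add-with-carry chains.  The second chain starts with 'add', not
 * 'adc', so it doesn't depend on the carry coming out of the first
 * chain, and an out-of-order core can run the two in parallel.
 */
static inline uint64_t csum_block_64(const void *p, uint64_t a, uint64_t b)
{
	asm("addq 0*8(%[p]),%[a]\n\t"
	    "adcq 1*8(%[p]),%[a]\n\t"
	    "adcq 2*8(%[p]),%[a]\n\t"
	    "adcq 3*8(%[p]),%[a]\n\t"
	    "adcq $0,%[a]\n\t"          /* fold end-around carry, chain 1 */
	    "addq 4*8(%[p]),%[b]\n\t"   /* fresh chain, no CF dependency */
	    "adcq 5*8(%[p]),%[b]\n\t"
	    "adcq 6*8(%[p]),%[b]\n\t"
	    "adcq 7*8(%[p]),%[b]\n\t"
	    "adcq $0,%[b]"              /* fold end-around carry, chain 2 */
	    : [a] "+r" (a), [b] "+r" (b)
	    : [p] "r" (p), "m" (*(const char (*)[64])p));
	a += b;                         /* combine the two partial sums */
	if (a < b)                      /* ones-complement carry fold */
		a++;
	return a;
}

so the "64-byte unrolling" is really just a way to run two half-size
carry chains side by side.)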
So you have three "magic" values, and the only one that really matters
is likely the 40-byte one.
Yes, yes, 64 bytes is the usual cacheline size, and is "traditional"
for unrolling. But there's nothing really magical about it here.
End result: wouldn't it have been nice to just do 40-byte chunks, and
make the 64-byte "two overlapping 32-byte chunks" be two of the
40-byte chunks?
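(Again just as an illustration of the shape I mean - not the attached
patch, and with a made-up helper name - a 40-byte chunk is a
five-quadword adc chain, and the "big" case is simply two of those
chains on independent accumulators:

#include <stdint.h>

/*
 * Illustration only: 80 bytes as two overlapping 40-byte chunks.  Each
 * chunk is one short adc chain; the second one starts with 'add' so
 * the two chains stay independent, same trick as the 32-byte pairs.
 */
static inline uint64_t csum_block_80(const void *p, uint64_t a, uint64_t b)
{
	asm("addq 0*8(%[p]),%[a]\n\t"
	    "adcq 1*8(%[p]),%[a]\n\t"
	    "adcq 2*8(%[p]),%[a]\n\t"
	    "adcq 3*8(%[p]),%[a]\n\t"
	    "adcq 4*8(%[p]),%[a]\n\t"
	    "adcq $0,%[a]\n\t"          /* first 40-byte chunk done */
	    "addq 5*8(%[p]),%[b]\n\t"   /* second chunk, fresh carry chain */
	    "adcq 6*8(%[p]),%[b]\n\t"
	    "adcq 7*8(%[p]),%[b]\n\t"
	    "adcq 8*8(%[p]),%[b]\n\t"
	    "adcq 9*8(%[p]),%[b]\n\t"
	    "adcq $0,%[b]"              /* second 40-byte chunk done */
	    : [a] "+r" (a), [b] "+r" (b)
	    : [p] "r" (p), "m" (*(const char (*)[80])p));
	a += b;                         /* combine the two partial sums */
	if (a < b)                      /* ones-complement carry fold */
		a++;
	return a;
}

A main loop could then eat 80 bytes at a time for as long as that much
is left, with one plain 40-byte chain and the usual small tails mopping
up the rest.)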
Something like the (ENTIRELY UNTESTED!) attached patch?
Again: this is *not* tested. I took a quick look at the generated
assembly, and it looked roughly like what I expected it to look like,
but it may be complete garbage.
I added a couple of "likely()" things just because it made the
generated asm look more natural (ie it followed the order of the
source code there); they are otherwise questionable annotations.
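(What I mean is the kind of thing below, sketched with a made-up
function and a byte-wise sum standing in for the real csum code -
purely to show the layout effect, not lifted from the patch:

#include <stddef.h>

#define likely(x) __builtin_expect(!!(x), 1)

/*
 * Illustration only: with the annotation, gcc keeps the small
 * 'len < 40' case as the straight-line fall-through path, so the
 * generated asm follows the source order instead of jumping around.
 */
static unsigned long sum_shape(const unsigned char *buf, size_t len, unsigned long sum)
{
	if (likely(len < 40)) {
		/* common short case: falls straight through */
		while (len--)
			sum += *buf++;
	} else {
		/* big-buffer case: chunked loop, out of the hot path */
		while (len >= 40) {
			size_t i;

			for (i = 0; i < 40; i++)
				sum += buf[i];
			buf += 40;
			len -= 40;
		}
		while (len--)
			sum += *buf++;
	}
	return sum;
}

)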
Finally: did I already mention that this is completely untested?