Re: [PATCH 05/17] Blackfin: convert to generic checksum code
From: Mike Frysinger
Date: Fri Jun 19 2009 - 06:43:31 EST
On Fri, Jun 19, 2009 at 05:05, Arnd Bergmann wrote:
> x86 and blackfin are both little-endian, so your variant is correct
> there. They add the 0x01 to the low byte of the 16-bit word, while
> on big-endian machines, you have to add it to the high byte.
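(for reference, the difference is just where the odd trailing byte
lands in the 16-bit accumulator -- paraphrasing the generic code's
tail handling, not quoting it:)

if (len & 1) {
#ifdef __LITTLE_ENDIAN
	result += buff[len - 1];	/* tail byte is the low half */
#else
	result += buff[len - 1] << 8;	/* tail byte is the high half */
#endif
}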
can we think of enough simple examples to throw together an optional
boot-time self test?
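something along these lines, dropped straight into lib/checksum.c so
it can reach the static do_csum() (rough sketch only -- ref_csum and
csum_selftest are made-up names, and the reference assumes
little-endian for brevity):

#include <linux/init.h>
#include <linux/kernel.h>

/* trivially-correct byte-at-a-time reference to compare against */
static unsigned int ref_csum(const unsigned char *buff, int len)
{
	unsigned int sum = 0;
	int i;

	for (i = 0; i < len; i++)
		sum += (i & 1) ? buff[i] << 8 : buff[i];
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}

static int __init csum_selftest(void)
{
	static const unsigned char data[] =
		{ 0xd, 0xa, 0x31, 0x30, 0x35, 0x20, 0x72 };
	int len, off;

	/* sweep lengths and alignments, since the failures below hit
	 * odd lengths and presumably odd addresses */
	for (off = 0; off < 2; off++)
		for (len = 1; len + off <= sizeof(data); len++)
			if (do_csum(data + off, len) !=
			    ref_csum(data + off, len))
				pr_err("do_csum selftest: off %d len %d\n",
				       off, len);
	return 0;
}
late_initcall(csum_selftest);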
> I've committed the patch below now.
closer, but not quite just yet ...
do_csum: mismatch 0x1700 != 0xa0d len:2
do_csum: { 0xd, 0xa }
(fwiw: (0xd << 8) + (0xa << 8) = 0x1700, so it looks like both bytes
got added into the high byte here -- presumably the big-endian path)
do_csum: mismatch 0x6f76 != 0x4f96 len:255
do_csum: { 0x20, 0x34, 0x34, 0x34, 0x20, 0x53, 0x20, 0x20, 0x20,
0x20, 0x2d, 0x2f, 0x62, 0x69, 0x6e, 0x2f, 0x73, 0x68, 0x20, 0xd, 0xa,
0x20, 0x20, 0x31, 0x30, 0x35, 0x20, 0x72, 0x6f, 0x6f, 0x74, 0x20,
0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x34, 0x34, 0x20, 0x53,
0x20, 0x20, 0x20, 0x20, 0x2f, 0x73, 0x62, 0x69, 0x6e, 0x2f, 0x69,
0x6e, 0x65, 0x74, 0x64, 0x20, 0xd, 0xa, 0x20, 0x20, 0x31, 0x30, 0x36,
0x20, 0x72, 0x6f, 0x6f, 0x74, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
0x20, 0x34, 0x33, 0x36, 0x20, 0x53, 0x20, 0x20, 0x20, 0x20, 0x2f,
0x73, 0x62, 0x69, 0x6e, 0x2f, 0x73, 0x79, 0x73, 0x6c, 0x6f, 0x67,
0x64, 0x20, 0x2d, 0x6e, 0x20, 0xd, 0xa, 0x20, 0x20, 0x31, 0x30, 0x37,
0x20, 0x72, 0x6f, 0x6f, 0x74, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
0x20, 0x34, 0x33, 0x32, 0x20, 0x52, 0x20, 0x20, 0x20, 0x20, 0x2f,
0x73, 0x62, 0x69, 0x6e, 0x2f, 0x6b, 0x6c, 0x6f, 0x67, 0x64, 0x20,
0x2d, 0x6e, 0x20, 0xd, 0xa, 0x20, 0x20, 0x31, 0x30, 0x38, 0x20, 0x72,
0x6f, 0x6f, 0x74, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20,
0x33, 0x32, 0x20, 0x53, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x62, 0x69,
0x6e, 0x2f, 0x77, 0x61, 0x74, 0x63, 0x68, 0x64, 0x6f, 0x67, 0x64,
0x20, 0x2d, 0x66, 0x20, 0x2d,
0x73, 0x20, 0xd, 0xa, 0x20, 0x20, 0x31, 0x30, 0x39, 0x20, 0x72, 0x6f,
0x6f, 0x74, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x35,
0x36, 0x20, 0x52, 0x20, 0x20, 0x20, 0x20, 0x2f, 0x62, 0x69, 0x6e,
0x2f, 0x74, 0x65, 0x6c, 0x6e, 0x65, 0x74, 0x64, 0x20, 0xd, 0xa, 0x20,
0x20, 0x31, 0x31, 0x30, 0x20, 0x72, 0x6f, 0x6f, 0x74, 0x20, 0x20,
0x20, 0x20, 0x20, 0x20 }
and even when do_csum does return correct values, csum_partial
sometimes does not:
csum_partial: mismatch 0x1101eefe != 0xffff len:32
csum_partial: { 0x92, 0x3b, 0x0, 0x17, 0xcf, 0xc1, 0x90, 0xec, 0x1c,
0x3f, 0xff, 0x99, 0x80, 0x10, 0x0, 0x5c, 0x22, 0xfa, 0x0, 0x0, 0x1,
0x1, 0x8, 0xa, 0x6, 0x83, 0xdd, 0x50, 0xff, 0xfe, 0xdf, 0x58 }
btw, would it make sense to change do_csum like so:
-static unsigned int do_csum(const unsigned char *buff, int len)
+__weak unsigned int do_csum(const unsigned char *buff, int len)
after all, it looks like that's really the function most arches would
want to hand-optimize. all the rest are simple standard wrappers
around do_csum(). forcing an arch to copy & paste all the other
functions just so it can implement an optimized do_csum() seems pretty
unfriendly. plus, it isn't like being static here makes any difference
optimization-wise -- it isn't going to get inlined anywhere in this
file.
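then an arch only has to provide a strong definition to win at link
time, something like (hypothetical file and placeholder body, just to
make the idea concrete):

/* arch/blackfin/lib/checksum.c (hypothetical): a non-weak definition
 * overrides the __weak generic one, and all the csum_* wrappers in
 * lib/checksum.c get reused as-is.
 */
unsigned int do_csum(const unsigned char *buff, int len)
{
	unsigned int sum = 0;

	/* placeholder -- a real port would drop its hand-optimized
	 * version in here. assumes little-endian and that unaligned
	 * 16-bit loads are ok, which is not true everywhere. */
	while (len > 1) {
		sum += *(const unsigned short *)buff;
		buff += 2;
		len -= 2;
	}
	if (len)
		sum += *buff;	/* trailing byte in the low half on LE */
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return sum;
}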
or, rather than weak symbols (in case some toolchains force indirect
pointer loads all the time), use the same override interface as the
rest of the generic checksum code:
#ifndef do_csum
static inline unsigned short from32to16(unsigned long x)
{
...
}
static unsigned int do_csum(const unsigned char *buff, int len)
{
...
}
#endif
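an arch wanting its own do_csum() would then do the usual asm-generic
dance in its asm/checksum.h (sketch -- this relies on lib/checksum.c
seeing the header, which it should via net/checksum.h):

/* arch/xxx/include/asm/checksum.h: declaring the function and
 * defining a macro of the same name makes the #ifndef in
 * lib/checksum.c skip the generic copy.
 */
extern unsigned int do_csum(const unsigned char *buff, int len);
#define do_csum do_csum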
-mike