Re: [PATCH] m68k: merge the mmu and non-mmu versions of checksum.h

From: Greg Ungerer
Date: Fri Jun 19 2009 - 02:54:58 EST

Hi Christoph,

Christoph Hellwig wrote:
On Wed, Jun 17, 2009 at 05:11:15PM +1000, Greg Ungerer wrote:
+#ifdef CONFIG_MMU
* This is a version of ip_compute_csum() optimized for IP headers,
* which always checksum on 4 octet boundaries.
@@ -59,6 +61,9 @@ static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
: "memory");
return (__force __sum16)~sum;
+__sum16 ip_fast_csum(const void *iph, unsigned int ihl);

Any good reason this is inline for all mmu processors and out of line
for nommu, independent of the actual cpu variant?

I don't recall if the simple (and thus non-mmu) m68k variants
support all the instructions used in this optimized version.
I will check that. It might be that this is misplaced and
actually needs to be conditional on the CPU type.

The C code version is significantly bigger; I think that is why
it was not inlined here (see arch/m68knommu/lib/checksum.c).

static inline __sum16 csum_fold(__wsum sum)
{
unsigned int tmp = (__force u32)sum;
+ tmp = (tmp & 0xffff) + (tmp >> 16);
+ tmp = (tmp & 0xffff) + (tmp >> 16);
+ return (__force __sum16)~tmp;
__asm__("swap %1\n\t"
"addw %1, %0\n\t"
"clrw %1\n\t"
@@ -74,6 +84,7 @@ static inline __sum16 csum_fold(__wsum sum)
: "=&d" (sum), "=&d" (tmp)
: "0" (sum), "1" (tmp));
return (__force __sum16)~sum;

I think this would be cleaner by having totally separate functions
for both cases, e.g.

static inline __sum16 csum_fold(__wsum sum)
{
	unsigned int tmp = (__force u32)sum;

	tmp = (tmp & 0xffff) + (tmp >> 16);
	tmp = (tmp & 0xffff) + (tmp >> 16);

	return (__force __sum16)~tmp;
}

Ok, I will change that.


Greg Ungerer -- Principal Engineer EMAIL: gerg@xxxxxxxxxxxx
SnapGear Group, McAfee PHONE: +61 7 3435 2888
825 Stanley St, FAX: +61 7 3891 3630
Woolloongabba, QLD, 4102, Australia WEB:
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx