Re: problems with e1000 and jumboframes

From: Chris Leech
Date: Thu Aug 03 2006 - 16:32:07 EST

On 8/3/06, Arnd Hannemann <arnd@xxxxxxxxxx> wrote:
> Well, you say "if a single buffer per frame is going to be used". If I
> understood you correctly, I could set the MTU to, let's say, 4000.
> Then the driver would enable the "jumbo frame bit" of the hardware and
> allocate only a 4k rx buffer, right? (and allocate 16k, because of
> Now if a new 9k frame arrives, the hardware will accept it regardless
> of the 2k MTU and will split it into 3x 4k rx buffers?
> Does the current driver work in this way? That would be great.
>
> Perhaps then one should change the driver in a way that the MTU can be
> changed independently of the buffer size?

Yes, e1000 devices will spill over and use multiple buffers for a
single frame. We've been trying to find a good way to use multiple
buffers to take care of these allocation problems. The structure of
the sk_buff does not make it easy. Or should I say, it's the
limitation that drivers are not allowed to chain multiple sk_buffs
together to represent a single frame that does not make it easy.

PCI-Express e1000 devices support a feature called header split, where
the protocol headers go into a different buffer from the payload. We
use that today to put headers into the kmalloc() allocated skb->data
area, and payload into one or more skb->frags[] pages. You don't ever
have multiple page allocations from the driver in this mode.

We could try to use only page allocations for older e1000 devices,
putting headers and payload into skb->frags and copying the headers
out into the skb->data area as needed for processing. That would do
away with large allocations, but in Jesse's experiments calling
alloc_page() is slower than kmalloc(), so there can actually be a
performance hit from trying to use page allocations all the time.

It's an interesting problem.

- Chris