Re: problems with e1000 and jumboframes

From: Arnd Hannemann
Date: Thu Aug 03 2006 - 17:38:06 EST


Evgeniy Polyakov wrote:
> On Thu, Aug 03, 2006 at 08:09:07PM +0200, Arnd Hannemann (arnd@xxxxxxxxxx) wrote:
>> Evgeniy Polyakov schrieb:
>>> On Thu, Aug 03, 2006 at 07:16:31PM +0400, Evgeniy Polyakov (johnpol@xxxxxxxxxxx) wrote:
>>>>>> then skb_alloc adds a little
>>>>>> (sizeof(struct skb_shared_info)) at the end, and this ends up
>>>>>> in a 32k request just for a 9k jumbo frame.
>>>>> Strange, why can't this skb_shared_info be added before the first alignment?
>>>>> And what about smaller frames like 1500, does this driver behave similarly
>>>>> (first align, then add)?
>>>> It can be.
>>>> Could the attached (completely untested) patch help?
>>> Actually this patch will not help; this new one could.
>>>
>> I applied the attached patch and got this output:
>>
>>> Intel(R) PRO/1000 Network Driver - bufsz 13762
>>> ...

>> I'm a bit puzzled that there are so many allocations. However, the patch
>> seems to work (at least it doesn't obviously break anything for me yet).
>
> Very strange output, actually: the comments in the code say that the frame
> size cannot exceed 0x3f00, but in this log it is much more than 16128, and
> that is after sizeof(struct skb_shared_info) has been removed...
> Could you please remove the debug output and run a network stress test in
> parallel with high disk/memory activity, to check that it does not break
> your system, and watch /proc/slabinfo for the 16k and 32k sized pools.
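
Something along the lines of the sketch below is enough to follow those two
pools while such a test runs. It is only a sketch, nothing e1000-specific,
and the 5-second interval is arbitrary; a grep of /proc/slabinfo in a shell
loop does the same job.

/*
 * Re-read /proc/slabinfo every few seconds and print the lines for
 * the 16k and 32k kmalloc pools.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[512];

	for (;;) {
		FILE *f = fopen("/proc/slabinfo", "r");

		if (!f) {
			perror("/proc/slabinfo");
			return 1;
		}
		while (fgets(line, sizeof(line), f))
			if (strstr(line, "size-16384") ||
			    strstr(line, "size-32768"))
				fputs(line, stdout);
		fclose(f);
		printf("---\n");
		sleep(5);
	}
}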

The system still seems to be stable.

From /proc/slabinfo during the netio test:
> size-32768(DMA)      0      0   32768    1    8 : tunables    8    4    0 : slabdata      0      0      0
> size-32768          84     89   32768    1    8 : tunables    8    4    0 : slabdata     84     89      0
> size-16384(DMA)      0      0   16384    1    4 : tunables    8    4    0 : slabdata      0      0      0
> size-16384         184    188   16384    1    4 : tunables    8    4    0 : slabdata    184    188      0
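
Those 32k objects match the size arithmetic discussed above: if the roughly
9k receive buffer is rounded up to a power of two first and
sizeof(struct skb_shared_info) is only added afterwards by the skb
allocation, the final kmalloc request spills into the 32k pool. Below is a
minimal userspace sketch of that arithmetic; the 9018-byte frame size and
the 256-byte stand-in for sizeof(struct skb_shared_info) are assumptions for
illustration, not values taken from the log.

#include <stdio.h>

#define JUMBO_FRAME	9018	/* assumed 9k jumbo frame incl. headers */
#define SHARED_INFO	256	/* rough stand-in for sizeof(struct skb_shared_info) */

/* smallest power of two >= x, i.e. the kmalloc pool serving a request of x bytes */
static unsigned long pow2_roundup(unsigned long x)
{
	unsigned long p = 1;

	while (p < x)
		p <<= 1;
	return p;
}

int main(void)
{
	/* round the frame up first, then let the skb allocation append skb_shared_info */
	unsigned long align_first = pow2_roundup(JUMBO_FRAME) + SHARED_INFO;

	/* account for skb_shared_info before rounding, as discussed above */
	unsigned long account_first = pow2_roundup(JUMBO_FRAME + SHARED_INFO);

	printf("align first:   kmalloc(%lu) -> %lu byte pool\n",
	       align_first, pow2_roundup(align_first));
	printf("account first: kmalloc(%lu) -> %lu byte pool\n",
	       account_first, pow2_roundup(account_first));
	return 0;
}

With these numbers the first case lands in the 32k pool while the second
stays within the 16k pool.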

Netio results:

NETIO - Network Throughput Benchmark, Version 1.26
(C) 1997-2005 Kai Uwe Rommel

TCP connection established.
Packet size 1k bytes: 72320 KByte/s Tx, 86656 KByte/s Rx.
Packet size 2k bytes: 71400 KByte/s Tx, 94703 KByte/s Rx.
Packet size 4k bytes: 71544 KByte/s Tx, 88463 KByte/s Rx.
Packet size 8k bytes: 70392 KByte/s Tx, 92127 KByte/s Rx.
Packet size 16k bytes: 70512 KByte/s Tx, 102607 KByte/s Rx.
Packet size 32k bytes: 71705 KByte/s Tx, 101083 KByte/s Rx.
Done.

It is strange that receiving seems to be much faster than transmitting.



Thanks,
Arnd


