Re: [PATCH v9] tilegx network driver: initial support

From: Ben Hutchings
Date: Wed Jun 06 2012 - 14:19:48 EST


On Wed, 2012-06-06 at 20:10 +0200, Eric Dumazet wrote:
> On Mon, 2012-06-04 at 16:12 -0400, Chris Metcalf wrote:
> > This change adds support for the tilegx network driver based on the
> > GXIO IORPC support in the tilegx software stack, using the on-chip
> > mPIPE packet processing engine.
> >
>
> > +
> > +/* Do "TSO" handling for egress.
> > + *
> > + * Normally drivers set NETIF_F_TSO only to support hardware TSO;
> > + * otherwise the stack uses scatter-gather to implement GSO in software.
> > + * In our testing, enabling GSO support (via NETIF_F_SG) drops network
> > + * performance down to around 7.5 Gbps on the 10G interfaces, although
> > + * also dropping cpu utilization way down, to under 8%. But
> > + * implementing "TSO" in the driver brings performance back up to line
> > + * rate, while dropping cpu usage even further, to less than 4%. In
> > + * practice, profiling of GSO shows that skb_segment() is what causes
> > + * the performance overheads; we benefit in the driver from using
> > + * preallocated memory to duplicate the TCP/IP headers.
> > + */
>
> All this stuff cost about 300 lines of code in this driver, without IPv6
> support.
>
> I am pretty sure this performance problem should be solved in net/{core|
> ipv4|ipv6} instead.
>
> What TCP performance do you get with TSO/GSO and SG off ?

It's a real problem and we have soft-TSO in the sfc driver for the same
reason. GSO means more allocation, more DMA mapping, more calls into
the driver and more register writes.

If drivers could invoke GSO explicitly from their ndo_start_xmit function,
much as they do with GRO, most of this overhead would presumably be avoidable.

Ben.

--
Ben Hutchings, Staff Engineer, Solarflare
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.
