Re: Data corruption issue with splice() on 2.6.27.10

From: Willy Tarreau
Date: Wed Jan 07 2009 - 07:52:29 EST


On Wed, Jan 07, 2009 at 03:40:34PM +0300, Evgeniy Polyakov wrote:
> On Wed, Jan 07, 2009 at 01:35:04PM +0100, Jens Axboe (jens.axboe@xxxxxxxxxx) wrote:
> > Regardless of that particular oddity, I don't think this is the right
> > path to take at all. We need to delay the pipe buffer consumption until
> > the appropriate time.
>
> As a proof of concept we can put a delayed work_struct into the buffer
> and only release its content after some timeout big enough (like one
> second or so) for the hardware to actually transmit its buffers.
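
For reference, this is roughly how I understand the proposal; the structure
and function names are mine (hypothetical) and the sketch is untested, just
to check that I got the idea right. I'm assuming "buf" is the pipe_buffer
whose content is being released:

#include <linux/workqueue.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* hypothetical PoC: keep a reference on the spliced page and only
 * drop it from a delayed work, once the NIC has had time to send it */
struct delayed_buf_release {
        struct delayed_work work;
        struct page *page;
};

static void delayed_buf_release_fn(struct work_struct *w)
{
        struct delayed_buf_release *d =
                container_of(w, struct delayed_buf_release, work.work);

        put_page(d->page);      /* finally drop our reference */
        kfree(d);
}

/* ... and in the buffer release path, instead of an immediate put: */
        struct delayed_buf_release *d;

        d = kmalloc(sizeof(*d), GFP_ATOMIC);
        if (d) {
                d->page = buf->page;
                get_page(d->page);      /* hold it for the hardware */
                INIT_DELAYED_WORK(&d->work, delayed_buf_release_fn);
                schedule_delayed_work(&d->work, HZ);    /* "one second or so" */
        }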

Evgeniy, I'd like to understand something about our apparent lack of
knowledge of when the data is effectively transmitted. If we're focusing
on the send side, I can't understand why I never reproduce the corruption
when the data source is a file or the loopback, but only when the source
is an ethernet interface. How can a problem affecting only the send side
be so selective about the source? And why can't we apply the same workflow
to outgoing data for both types of sources? It seems to me that the page
is released at the right time when sending from a file, and I don't see
why we cannot apply the same principle when splicing between sockets.
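
To make sure we are talking about the same case, this is the usual
pipe-based forwarding pattern I have in mind on the userspace side
(a minimal sketch, error handling stripped to the bare minimum):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* forward up to <len> bytes from sock_in to sock_out through a pipe */
static ssize_t splice_fwd(int sock_in, int sock_out, size_t len)
{
        int p[2];
        ssize_t in, out = 0;

        if (pipe(p) < 0)
                return -1;

        /* move the receive queue pages into the pipe... */
        in = splice(sock_in, NULL, p[1], NULL, len,
                    SPLICE_F_MOVE | SPLICE_F_MORE);
        /* ...then link them into the outgoing socket */
        if (in > 0)
                out = splice(p[0], NULL, sock_out, NULL, in,
                             SPLICE_F_MOVE | SPLICE_F_MORE);

        close(p[0]);
        close(p[1]);
        return in > 0 ? out : in;
}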

Please excuse my blatant ignorance in this area; as I once said, I could
not completely follow the whole splice process between tcp_splice_read()
and the moment the data leaves the machine. I also failed to understand
what "linear" data means. It seems to me these are the parts that are
memcpy'd, but I'm not sure.
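
If my guess is right, the layout would be something like this (possibly
wrong, this is just my mental model, and the function is only illustrative):

#include <linux/skbuff.h>
#include <linux/kernel.h>

static void show_skb_layout(struct sk_buff *skb)
{
        unsigned int lin = skb_headlen(skb);
        int i;

        /* linear area: skb->data .. skb->data + skb_headlen(skb),
           the part that would have to be memcpy'd */
        printk(KERN_DEBUG "linear: %u bytes\n", lin);

        /* paged fragments: pages that could be linked into the pipe
           as-is, without copying */
        for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
                skb_frag_t *f = &skb_shinfo(skb)->frags[i];

                printk(KERN_DEBUG "frag %d: %u bytes at offset %u\n",
                       i, f->size, f->page_offset);
        }
}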

Thanks,
Willy
