On Sun, 17 Jun 2001, Dan Podeanu wrote:
> Is there any logical reason why if, given fd is a connected, AF_INET,
> SOCK_STREAM socket, and one does a write(fd, buffer, len); close(fd);
> to the peer, over a rather slow network (read: modem, satellite link, etc),
> the data gets lost (the remote receives the disconnect before the last
> packet). According to socket(7), even if SO_LINGER is not set, the data
> is flushed in the background.
suppose A writes to B, and A closes its fd. suppose that B writes to A and
its data arrives at A before the A->B data has left A's send buffer. in that
case A will send an RST and drop the data still in its send buffer, so
you've lost the data you thought you had transmitted.
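
for example, the losing pattern looks like this (a minimal sketch; fd is
assumed to be a connected SOCK_STREAM socket, buffer/len as in the
question):

  write(fd, buffer, len);  /* bytes may still sit in the kernel
                            * send buffer after this returns */
  close(fd);               /* if B->A data arrives (or is already
                            * queued unread) at this point, A sends
                            * RST and the buffered bytes are gone */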
> Is it Linux or TCP specific? Or some obvious technical detail I'm missing?
it's TCP.
the linux-specific behaviour (and the recommended behaviour now) is that,
in the scenario above, if the B->A traffic arrived before the close but
wasn't read by the application on A, then the RST is sent immediately. this
generally results in folks discovering their broken applications much
earlier than on other stacks. it's basically a race condition as to when
the B->A data arrives.
the above is the reason apache uses at least 4 system calls to tear down a
connection... with http/1.1 and pipelining it's totally valid for the B->A
traffic to be sent regardless of what's happening in the A->B direction.
(ditto for multiplexed protocols... and to some extent SMTP pipelining.)
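
roughly, such a teardown looks like this (a sketch only, not apache's
actual code; the 30 second timeout and buffer size are arbitrary choices):

  #include <sys/select.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* drain the peer after sending our FIN, so a late write from B
   * doesn't turn our close into an RST that destroys A->B data */
  void lingering_close(int fd)
  {
      char junk[512];
      struct timeval tv = { 30, 0 };   /* arbitrary patience */
      fd_set rfds;

      shutdown(fd, SHUT_WR);           /* 1: send FIN, keep receiving */
      for (;;) {
          FD_ZERO(&rfds);
          FD_SET(fd, &rfds);
          if (select(fd + 1, &rfds, NULL, NULL, &tv) <= 0)
              break;                   /* 2: timeout or error */
          if (read(fd, junk, sizeof junk) <= 0)
              break;                   /* 3: EOF, peer closed its side */
      }
      close(fd);                       /* 4 */
  }

the point of shutdown(SHUT_WR) instead of a bare close() is that it sends
the FIN but keeps the fd open for reading, so nothing the peer sends can
trigger the RST behaviour described above.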
-dean