Re: [PATCH] tcp: do not promote SPLICE_F_NONBLOCK to socket O_NONBLOCK
From: Evgeniy Polyakov
Date: Sat Jul 19 2008 - 04:51:32 EST
On Fri, Jul 18, 2008 at 09:43:38PM +0300, Octavian Purdila (opurdila@xxxxxxxxxxx) wrote:
> The flag gets propagated to splice_to_pipe (so there is no need to propagate
> the check in skb_splice_bits) but we don't have SPLICE_F_NONBLOCK set, we are
> on the blocking usecase.
Hmm, then how does that concern the SPLICE_F_NONBLOCK behaviour change?
Your patch does not touch this behaviour.
Anyway, when SPLICE_F_NONBLOCK is not set it does not deadlock: it
blocks and waits until someone reads from the other side of the pipe,
expecting the reader to wake it up.
It looks like you have hit two independent issues with splice:
the ability to perform a non-blocking read from the socket into the pipe
when SPLICE_F_NONBLOCK is set,
and blocking while waiting for a reader to get data out of the pipe when
SPLICE_F_NONBLOCK is not set. Is that correct?
If so, the former is a feature, which allows some progress to be made
when the receive queue is empty: one can start getting data out of the
pipe, i.e. splice data from the pipe to a different file descriptor.
So this flag means both non-blocking pipe operations _and_ non-blocking
receiving from the socket.
As for blocking in pipe_wait() when the pipe is full and
SPLICE_F_NONBLOCK is not set: that is simply pipe behaviour, since the
pipe is used as temporary storage for the requested data. It is not a
buffer that is returned to userspace when it is full (or with an
indication of that), but a pipe into which you put page pointers, so
when the pipe is full and opened in blocking mode, the writer sleeps
waiting for a reader to get some data out of it and thus wake the
writer. That is not a deadlock, but the entirely expected behaviour of
a pipe: to block when it is full, waiting for a reader to drain it.
Hope this clarifies the discussion a bit, so it no longer looks like a
conversation between the blind and the deaf :)