[PATCH 3.2 101/200] SUNRPC: Fix races in xs_nospace()
From: Ben Hutchings
Date: Sun Mar 30 2014 - 20:16:51 EST
3.2.56-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
commit 06ea0bfe6e6043cb56a78935a19f6f8ebc636226 upstream.
When a send failure occurs due to the socket being out of buffer space,
we call xs_nospace() in order to have the RPC task wait until the
socket has drained enough to make it worthwhile trying again.
This patch fixes a race in which the socket is drained before we get
around to setting up the wait machinery in xs_nospace(); that race
is reported to cause hangs.
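
For reviewers who want the shape of the bug in isolation, here is a
minimal userspace sketch of the same lost-wakeup pattern. It is
illustrative only; struct notifier, write_space() and nospace() are
hypothetical names and do not come from xprtsock.c.

#include <stdatomic.h>

/* Edge-triggered wakeup: the notifier only wakes tasks that have
 * already registered, much as the socket write-space callback only
 * helps a task that has already queued itself to wait.
 */
struct notifier {
	atomic_bool registered;	/* roughly: sk_write_pending != 0 */
	void (*wake)(struct notifier *n);
};

/* Producer side: buffer space was freed, deliver the edge.
 * If nobody has registered yet, the edge is simply dropped.
 */
static void write_space(struct notifier *n)
{
	if (atomic_load(&n->registered))
		n->wake(n);
}

/* Consumer side, mirroring the fixed xs_nospace(): the send failed
 * for lack of space, so register for a wakeup, then re-drive the
 * notifier once.
 */
static void nospace(struct notifier *n)
{
	atomic_store(&n->registered, 1);	/* register interest */

	/* Race breaker: space may have been freed between the failed
	 * send and the store above; that edge found registered == 0
	 * and was dropped.  Re-driving write_space() here converts
	 * the lost edge into a wakeup now that we are registered.
	 */
	write_space(n);
}
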
Link: http://lkml.kernel.org/r/20140210170315.33dfc621@xxxxxxxxxxxxxx
Fixes: a9a6b52ee1ba (SUNRPC: Don't start the retransmission timer...)
Reported-by: Neil Brown <neilb@xxxxxxxx>
Signed-off-by: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
Signed-off-by: Ben Hutchings <ben@xxxxxxxxxxxxxxx>
---
net/sunrpc/xprtsock.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -501,6 +501,7 @@ static int xs_nospace(struct rpc_task *t
 	struct rpc_rqst *req = task->tk_rqstp;
 	struct rpc_xprt *xprt = req->rq_xprt;
 	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
+	struct sock *sk = transport->inet;
 	int ret = -EAGAIN;
 
 	dprintk("RPC:       %5u xmit incomplete (%u left of %u)\n",
@@ -518,7 +519,7 @@ static int xs_nospace(struct rpc_task *t
 			 * window size
 			 */
 			set_bit(SOCK_NOSPACE, &transport->sock->flags);
-			transport->inet->sk_write_pending++;
+			sk->sk_write_pending++;
 			/* ...and wait for more buffer space */
 			xprt_wait_for_buffer_space(task, xs_nospace_callback);
 		}
@@ -528,6 +529,9 @@ static int xs_nospace(struct rpc_task *t
 	}
 
 	spin_unlock_bh(&xprt->transport_lock);
+
+	/* Race breaker in case memory is freed before above code is called */
+	sk->sk_write_space(sk);
 	return ret;
 }
 
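
A note on the shape of the fix, as I read the diff: sk is sampled once
at the top of the function and reused for the race-breaker call, so the
callback is re-driven on the very socket that was examined under
transport_lock rather than re-reading transport->inet after the lock has
been dropped. The call is also made unconditionally once the lock is
released; if no task actually queued itself, the write-space callback
simply finds nothing to wake, while in the racy case it delivers the
wakeup that would otherwise have been lost.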