Re: [PATCH] sunrpc: fix peername failed on closed listener

From: J. Bruce Fields
Date: Tue Jan 05 2010 - 18:00:51 EST


On Thu, Dec 31, 2009 at 10:52:36AM +0800, Xiaotian Feng wrote:
> There are some warnings of "nfsd: peername failed (err 107)!".
> Socket error -107 means "Transport endpoint is not connected".
> This warning message is output by svc_tcp_accept() [net/sunrpc/svcsock.c]
> when kernel_getpeername() returns -107, which means the socket might be CLOSED.
>
> And svc_tcp_accept() is called from svc_recv() [net/sunrpc/svc_xprt.c]:
>
> if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
> <snip>
> newxpt = xprt->xpt_ops->xpo_accept(xprt);
> <snip>
>
> So this might happen when xprt->xpt_flags has both XPT_LISTENER and XPT_CLOSE.
>
> Take a look at commit b0401d72: it moved the close processing after the
> recvfrom method, but it also introduced these warnings. If xpt_flags has
> both XPT_LISTENER and XPT_CLOSE set, we should close the transport, not
> accept it and then close it.

The logic here seems unnecessarily complicated now, but as a minimal
fix, this seems fine.
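
(What I'd find easier to follow, longer term, is hoisting the close check
so that neither the accept nor the recvfrom branch has to care about it.
Roughly the shape below -- an untested sketch only, not a patch, and it
ignores the svc_xprt_received()/refcount bookkeeping around those paths:)

	/* untested sketch: deal with a dying transport up front so the
	 * accept and recvfrom paths never see XPT_CLOSE */
	if (test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
		svc_delete_xprt(xprt);
	} else if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
		/* ... existing accept path, unchanged ... */
	} else {
		/* ... existing recvfrom path, unchanged ... */
	}

That's a separate cleanup, though.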

Is silencing this warning the *only* justification for this, or is there
some more serious problem I'm missing?

--b.

>
> Signed-off-by: Xiaotian Feng <dfeng@xxxxxxxxxx>
> Cc: J. Bruce Fields <bfields@xxxxxxxxxxxx>
> Cc: Neil Brown <neilb@xxxxxxx>
> Cc: Trond Myklebust <Trond.Myklebust@xxxxxxxxxx>
> Cc: David S. Miller <davem@xxxxxxxxxxxxx>
> ---
> diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
> index 1c924ee..187f0f4 100644
> --- a/net/sunrpc/svc_xprt.c
> +++ b/net/sunrpc/svc_xprt.c
> @@ -699,7 +699,8 @@ int svc_recv(struct svc_rqst *rqstp, long timeout)
> spin_unlock_bh(&pool->sp_lock);
>
> len = 0;
> - if (test_bit(XPT_LISTENER, &xprt->xpt_flags)) {
> + if (test_bit(XPT_LISTENER, &xprt->xpt_flags) &&
> + !test_bit(XPT_CLOSE, &xprt->xpt_flags)) {
> struct svc_xprt *newxpt;
> newxpt = xprt->xpt_ops->xpo_accept(xprt);
> if (newxpt) {