RE: [PATCH net-next v5 11/19] siw: Inline do_tcp_sendpages()

From: Bernard Metzler
Date: Thu Apr 06 2023 - 11:37:30 EST

> -----Original Message-----
> From: David Howells <dhowells@xxxxxxxxxx>
> Sent: Thursday, 6 April 2023 11:43
> To: netdev@xxxxxxxxxxxxxxx
> Cc: David Howells <dhowells@xxxxxxxxxx>; David S. Miller
> <davem@xxxxxxxxxxxxx>; Eric Dumazet <edumazet@xxxxxxxxxx>; Jakub Kicinski
> <kuba@xxxxxxxxxx>; Paolo Abeni <pabeni@xxxxxxxxxx>; Willem de Bruijn
> <willemdebruijn.kernel@xxxxxxxxx>; Matthew Wilcox <willy@xxxxxxxxxxxxx>; Al
> Viro <viro@xxxxxxxxxxxxxxxxxx>; Christoph Hellwig <hch@xxxxxxxxxxxxx>; Jens
> Axboe <axboe@xxxxxxxxx>; Jeff Layton <jlayton@xxxxxxxxxx>; Christian
> Brauner <brauner@xxxxxxxxxx>; Chuck Lever III <chuck.lever@xxxxxxxxxx>;
> Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>; linux-
> fsdevel@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx;
> Bernard Metzler <BMT@xxxxxxxxxxxxxx>; Jason Gunthorpe <jgg@xxxxxxxx>; Leon
> Romanovsky <leon@xxxxxxxxxx>; Tom Talpey <tom@xxxxxxxxxx>; linux-
> rdma@xxxxxxxxxxxxxxx
> Subject: [EXTERNAL] [PATCH net-next v5 11/19] siw: Inline
> do_tcp_sendpages()
>
> do_tcp_sendpages() is now just a small wrapper around tcp_sendmsg_locked(),
> so inline it, allowing do_tcp_sendpages() to be removed. This is part of
> replacing ->sendpage() with a call to sendmsg() with MSG_SPLICE_PAGES set.
>
> Signed-off-by: David Howells <dhowells@xxxxxxxxxx>
> cc: Bernard Metzler <bmt@xxxxxxxxxxxxxx>
> cc: Jason Gunthorpe <jgg@xxxxxxxx>
> cc: Leon Romanovsky <leon@xxxxxxxxxx>
> cc: Tom Talpey <tom@xxxxxxxxxx>
> cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
> cc: Eric Dumazet <edumazet@xxxxxxxxxx>
> cc: Jakub Kicinski <kuba@xxxxxxxxxx>
> cc: Paolo Abeni <pabeni@xxxxxxxxxx>
> cc: Jens Axboe <axboe@xxxxxxxxx>
> cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> cc: linux-rdma@xxxxxxxxxxxxxxx
> cc: netdev@xxxxxxxxxxxxxxx
> ---
> drivers/infiniband/sw/siw/siw_qp_tx.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
> index 05052b49107f..fa5de40d85d5 100644
> --- a/drivers/infiniband/sw/siw/siw_qp_tx.c
> +++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
> @@ -313,7 +313,7 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
> }
>
> /*
> - * 0copy TCP transmit interface: Use do_tcp_sendpages.
> + * 0copy TCP transmit interface: Use MSG_SPLICE_PAGES.
> *
> * Using sendpage to push page by page appears to be less efficient
> * than using sendmsg, even if data are copied.
> @@ -324,20 +324,27 @@ static int siw_tx_ctrl(struct siw_iwarp_tx *c_tx, struct socket *s,
> static int siw_tcp_sendpages(struct socket *s, struct page **page, int offset,
> size_t size)
> {
> + struct bio_vec bvec;
> + struct msghdr msg = {
> + .msg_flags = (MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST |
> + MSG_SPLICE_PAGES),
> + };
> struct sock *sk = s->sk;
> - int i = 0, rv = 0, sent = 0,
> - flags = MSG_MORE | MSG_DONTWAIT | MSG_SENDPAGE_NOTLAST;
> + int i = 0, rv = 0, sent = 0;
>
> while (size) {
> size_t bytes = min_t(size_t, PAGE_SIZE - offset, size);
>
> if (size + offset <= PAGE_SIZE)
> - flags = MSG_MORE | MSG_DONTWAIT;
> + msg.msg_flags = MSG_MORE | MSG_DONTWAIT;
>
> tcp_rate_check_app_limited(sk);
> + bvec_set_page(&bvec, page[i], bytes, offset);
> + iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, size);
> +
> try_page_again:
> lock_sock(sk);
> - rv = do_tcp_sendpages(sk, page[i], offset, bytes, flags);
> + rv = tcp_sendmsg_locked(sk, &msg, size);
> release_sock(sk);
>
> if (rv > 0) {

lgtm.
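
For anyone following the conversion, the new zero-copy path for a single page
fragment boils down to roughly the sketch below. This is a trimmed
illustration of the hunk above, not the driver code itself: the multi-page
loop, the MSG_SENDPAGE_NOTLAST handling of intermediate fragments and the
error/retry paths are left out, and splice_one_page is just an illustrative
name.

#include <linux/bvec.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <net/sock.h>
#include <net/tcp.h>

/* Push one page fragment through tcp_sendmsg_locked() as spliced pages. */
static int splice_one_page(struct sock *sk, struct page *page,
			   int offset, size_t bytes)
{
	struct bio_vec bvec;
	struct msghdr msg = {
		.msg_flags = MSG_MORE | MSG_DONTWAIT | MSG_SPLICE_PAGES,
	};
	int rv;

	/* Describe the fragment and hand it to sendmsg as a bvec iterator. */
	bvec_set_page(&bvec, page, bytes, offset);
	iov_iter_bvec(&msg.msg_iter, ITER_SOURCE, &bvec, 1, bytes);

	lock_sock(sk);
	rv = tcp_sendmsg_locked(sk, &msg, bytes);
	release_sock(sk);

	return rv;
}

The point being that the bvec iterator plus MSG_SPLICE_PAGES lets
tcp_sendmsg_locked() splice the pages in directly, so the old
do_tcp_sendpages() helper is no longer needed.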

Reviewed-by: Bernard Metzler <bmt@xxxxxxxxxxxxxx>