alternate queueing mechanism (was: [PATCH] unix: avoid use-after-free in ep_remove_wait_queue)
From: Rainer Weikusat
Date: Sun Nov 22 2015 - 16:44:44 EST
Rainer Weikusat <rweikusat@xxxxxxxxxxxxxxxxxxxxxxx> writes:
[AF_UNIX SOCK_DGRAM throughput]
> It may be possible to improve this by tuning/changing the flow
> control mechanism. Off the top of my head, I'd suggest making the
> queue longer (the default value is 10) and delaying wake-ups until
> the server has actually caught up, IOW, until the receive queue is
> empty or almost empty. But this ought to be done with a different
> patch.
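For context, the 'is the receive queue full' check referred to above is, roughly as in stock 4.3 af_unix.c, the following helper; the default queue length of 10 comes from the net.unix.max_dgram_qlen sysctl, which is copied into sk_max_ack_backlog when the socket is created:

static inline int unix_recvq_full(struct sock const *sk)
{
	return skb_queue_len(&sk->sk_receive_queue) > sk->sk_max_ack_backlog;
}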
Because I was curious about the effects, I implemented the suggestion
quoted above, using a design slightly modified from the one I originally
proposed in order to account for the different uses of the 'is the
receive queue full' check. The code uses a datagram-specific checking
function,
static int unix_dgram_recvq_full(struct sock const *sk)
{
	struct unix_sock *u;

	u = unix_sk(sk);

	/* Once the queue has overflowed, keep reporting it as full until
	 * the receiver has drained it and cleared the bit (see below).
	 */
	if (test_bit(UNIX_DG_FULL, &u->flags))
		return 1;

	if (!unix_recvq_full(sk))
		return 0;

	/* The queue just crossed the limit: latch the overflow state. */
	__set_bit(UNIX_DG_FULL, &u->flags);
	return 1;
}
which is called instead of unix_recvq_full for the n:1 datagram checks, and a
	if (test_bit(UNIX_DG_FULL, &u->flags) &&
	    !skb_queue_len(&sk->sk_receive_queue)) {
		__clear_bit(UNIX_DG_FULL, &u->flags);
		wake_up_interruptible_sync_poll(&u->peer_wait,
						POLLOUT | POLLWRNORM |
						POLLWRBAND);
	}
in unix_dgram_recvmsg to delay wakeups, in case the queue overflowed
earlier, until the queued datagrams have been consumed. This has the nice
additional side effect that wakeups are never done for 1:1 connected
datagram sockets (both SOCK_DGRAM and SOCK_SEQPACKET), where they are of
no use anyway.
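As a minimal sketch of the remaining glue (the concrete bit value for
UNIX_DG_FULL and the exact shape of the send-side hunk shown here are
illustrative rather than the literal patch):

/* New bit in struct unix_sock's existing 'flags' word; the concrete
 * value below is only an example and must not collide with the
 * UNIX_GC_* bits already defined there.
 */
#define UNIX_DG_FULL	2

/* Send side (sketch, following the 4.3 unix_dgram_sendmsg structure):
 * the n:1 flow-control test uses the latching helper instead of
 * unix_recvq_full, so a sender keeps blocking (or keeps getting
 * -EAGAIN) until the receiver has completely drained its queue and
 * the wakeup above has fired.
 */
	if (unix_peer(other) != sk && unix_dgram_recvq_full(other)) {
		if (!timeo) {
			err = -EAGAIN;
			goto out_unlock;
		}
		timeo = unix_wait_for_peer(other, timeo);
		goto restart;
	}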
Compared to a 'stock' 4.3 running the test program I posted (intended to
make the overhead noticeable by sending lots of small messages), the
average number of bytes sent per second increased by about 782,961.79
(ca. 764.61K), i.e. by about 5.32% of the 4.3 number (14,714,579.91),
with a fairly simple code change.
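(For reference, the quoted figures are consistent with each other:
782,961.79 / 14,714,579.91 is approximately 0.0532, i.e. about 5.32%,
and 782,961.79 bytes/s is about 764.61K/s when dividing by 1024.)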