vhost-net: is there a race for sock in handle_tx/rx?
From: Liu ping fan
Date: Thu May 03 2012 - 04:33:59 EST
Hi,
While reading the vhost-net code, I came across the following:
static void handle_tx(struct vhost_net *net)
{
	struct vhost_virtqueue *vq = &net->dev.vqs[VHOST_NET_VQ_TX];
	unsigned out, in, s;
	int head;
	struct msghdr msg = {
		.msg_name = NULL,
		.msg_namelen = 0,
		.msg_control = NULL,
		.msg_controllen = 0,
		.msg_iov = vq->iov,
		.msg_flags = MSG_DONTWAIT,
	};
	size_t len, total_len = 0;
	int err, wmem;
	size_t hdr_size;
	struct socket *sock;
	struct vhost_ubuf_ref *uninitialized_var(ubufs);
	bool zcopy;

	/* TODO: check that we are running from vhost_worker? */
	sock = rcu_dereference_check(vq->private_data, 1);
	if (!sock)
		return;
--------------------------------> At this point, QEMU can call
vhost_net_set_backend() to install a new backend fd and close
@oldsock->file, so sock->file's refcount can drop to 0 while
handle_tx() still holds the pointer. Can vhost_worker protect
itself against this situation, and if so, how?
	wmem = atomic_read(&sock->sk->sk_wmem_alloc);
	...
Is it a race?
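For reference, the backend-swap path in question looks roughly like this
(a paraphrased, elided sketch of vhost_net_set_backend() from the same
era, possibly inexact in detail; the placement of vhost_net_flush_vq()
relative to the final fput() is the part that matters):

```c
/* Sketch, not verbatim kernel code */
static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
{
	...
	mutex_lock(&vq->mutex);
	oldsock = rcu_dereference_protected(vq->private_data,
					    lockdep_is_held(&vq->mutex));
	if (sock != oldsock) {
		vhost_net_disable_vq(n, vq);
		rcu_assign_pointer(vq->private_data, sock); /* publish new backend */
		vhost_net_enable_vq(n, vq);
	}
	mutex_unlock(&vq->mutex);
	...
	if (oldsock) {
		vhost_net_flush_vq(n, index); /* wait for in-flight handle_tx/rx */
		fput(oldsock->file);          /* only then drop the reference */
	}
	...
}
```

If the flush is guaranteed to complete after any in-flight handle_tx()
has returned, the old sock cannot be released underneath it; so the
question seems to reduce to whether the flush always covers the worker
invocation shown above.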
Thanks and regards,
pingfan