Re: [PATCH] vhost/vsock: Use kvmalloc/kvfree for larger packets.
From: Michael S. Tsirkin
Date: Thu Sep 29 2022 - 03:49:47 EST
On Thu, Sep 29, 2022 at 09:46:06AM +0200, Stefano Garzarella wrote:
> On Thu, Sep 29, 2022 at 03:19:14AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Sep 29, 2022 at 08:14:24AM +0900, Junichi Uekawa (上川純一) wrote:
> > > On Thu, Sep 29, 2022 at 0:11 Stefano Garzarella <sgarzare@xxxxxxxxxx> wrote:
> > > >
> > > > On Wed, Sep 28, 2022 at 05:31:58AM -0400, Michael S. Tsirkin wrote:
> > > > >On Wed, Sep 28, 2022 at 10:28:23AM +0200, Stefano Garzarella wrote:
> > > > >> On Wed, Sep 28, 2022 at 03:45:38PM +0900, Junichi Uekawa wrote:
> > > > >> > When copying a large file over sftp over vsock, the data size is
> > > > >> > usually 32kB, and kmalloc seems to fail when trying to allocate
> > > > >> > 32 32kB regions.
> > > > >> >
> > > > >> > Call Trace:
> > > > >> > [<ffffffffb6a0df64>] dump_stack+0x97/0xdb
> > > > >> > [<ffffffffb68d6aed>] warn_alloc_failed+0x10f/0x138
> > > > >> > [<ffffffffb68d868a>] ? __alloc_pages_direct_compact+0x38/0xc8
> > > > >> > [<ffffffffb664619f>] __alloc_pages_nodemask+0x84c/0x90d
> > > > >> > [<ffffffffb6646e56>] alloc_kmem_pages+0x17/0x19
> > > > >> > [<ffffffffb6653a26>] kmalloc_order_trace+0x2b/0xdb
> > > > >> > [<ffffffffb66682f3>] __kmalloc+0x177/0x1f7
> > > > >> > [<ffffffffb66e0d94>] ? copy_from_iter+0x8d/0x31d
> > > > >> > [<ffffffffc0689ab7>] vhost_vsock_handle_tx_kick+0x1fa/0x301 [vhost_vsock]
> > > > >> > [<ffffffffc06828d9>] vhost_worker+0xf7/0x157 [vhost]
> > > > >> > [<ffffffffb683ddce>] kthread+0xfd/0x105
> > > > >> > [<ffffffffc06827e2>] ? vhost_dev_set_owner+0x22e/0x22e [vhost]
> > > > >> > [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
> > > > >> > [<ffffffffb6eb332e>] ret_from_fork+0x4e/0x80
> > > > >> > [<ffffffffb683dcd1>] ? flush_kthread_worker+0xf3/0xf3
> > > > >> >
> > > > >> > Work around this by using kvmalloc instead.
> > > > >> >
> > > > >> > Signed-off-by: Junichi Uekawa <uekawa@xxxxxxxxxxxx>
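> > > > >> >
> > > > >> > The gist of the change, as a sketch (assuming the allocation and
> > > > >> > free sites of the time, i.e. vhost_vsock_alloc_pkt() in
> > > > >> > drivers/vhost/vsock.c and virtio_transport_free_pkt() in
> > > > >> > virtio_transport_common.c):
> > > > >> >
> > > > >> > -	pkt->buf = kmalloc(pkt->len, GFP_KERNEL);
> > > > >> > +	pkt->buf = kvmalloc(pkt->len, GFP_KERNEL);
> > > > >> > ...
> > > > >> > -	kfree(pkt->buf);
> > > > >> > +	kvfree(pkt->buf);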
> > > > >
> > > > >My worry here is that this is more of a workaround.
> > > > >It would be better not to allocate memory so aggressively:
> > > > >if we are so short on memory, we should probably process
> > > > >packets one at a time. Is that very hard to implement?
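> > > > >
> > > > >Hypothetically, something like this in the tx kick handler (just a
> > > > >sketch, assuming the current vhost_vsock_handle_tx_kick() loop):
> > > > >
> > > > >	pkt = vhost_vsock_alloc_pkt(vq, out, in);
> > > > >	if (!pkt) {
> > > > >		/* Out of memory: stop draining the virtqueue
> > > > >		 * for now instead of failing the whole vq.
> > > > >		 */
> > > > >		break;
> > > > >	}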
> > > >
> > > > Currently the "virtio_vsock_pkt" is allocated in the "handle_kick"
> > > > callback of the TX virtqueue. The packet is then multiplexed onto the
> > > > right socket queue, and user space can dequeue it whenever it wants.
> > > >
> > > > So maybe we can stop processing the virtqueue if we are short on memory,
> > > > but when can we restart the TX virtqueue processing?
> > > >
> > > > I think as long as the guest used only 4K buffers we had no problem, but
> > > > now that it can create larger buffers, the host may not be able to
> > > > allocate them contiguously. Since there is no need for them to be
> > > > contiguous here, I think this patch is okay.
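> > > >
> > > > (For context: kvmalloc() tries kmalloc() first and falls back to
> > > > vmalloc() when a physically contiguous allocation fails, so the caller
> > > > only has to pair it with kvfree():
> > > >
> > > > 	buf = kvmalloc(len, GFP_KERNEL);	/* may fall back to vmalloc */
> > > > 	if (!buf)
> > > > 		return NULL;
> > > > 	...
> > > > 	kvfree(buf);	/* correct for both kmalloc'ed and vmalloc'ed memory */
> > > >
> > > > The buffer is only ever accessed through its kernel virtual address, so
> > > > the vmalloc fallback is safe here.)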
> > > >
> > > > However, if we switch to sk_buff (as Bobby is already doing), maybe we
> > > > won't have this problem, because I think there is some kind of
> > > > pre-allocated pool.
> > > >
> > >
> > > Thank you for the review! I was wondering if this is a reasonable workaround
> > > (as we found that this patch turns a reliably crashing system into a
> > > reliably surviving system).
> > >
> > >
> > > ... Sounds like it is a reasonable patch to backport to older kernels?
> >
> > Hmm. Good point about stable. OK.
>
> Right, so in this case I think it is better to add a Fixes tag. Since we
> have used kmalloc from the beginning, we can use the following:
>
> Fixes: 433fc58e6bf2 ("VSOCK: Introduce vhost_vsock.ko")
>
> >
> > Acked-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
> >
>
> @Michael are you queueing this, or should it go through the net tree?
>
> Thanks,
> Stefano
The net tree would be preferable; my pull for this release is kind of ready ... kuba?
--
MST