Re: Increased memory usage with scsi-mq
From: Richard W.M. Jones
Date: Thu Aug 10 2017 - 08:22:41 EST
On Wed, Aug 09, 2017 at 06:50:10PM +0200, Paolo Bonzini wrote:
> On 09/08/2017 18:01, Christoph Hellwig wrote:
> > On Mon, Aug 07, 2017 at 03:07:48PM +0200, Paolo Bonzini wrote:
> >> can_queue should depend on the virtqueue size, which unfortunately can
> >> vary for each virtio-scsi device in theory. The virtqueue size is
> >> retrieved by drivers/virtio prior to calling vring_create_virtqueue, and
> >> in QEMU it is the second argument to virtio_add_queue.
> >
> > Why is that unfortunate? We don't even have to set can_queue in
> > the host template, we can dynamically set it on per-host basis.
>
> Ah, cool, I thought allocations based on can_queue happened already in
> scsi_host_alloc, but they happen at scsi_add_host time.
I think I've decoded all that information into the patch below.
I tested it, and it appears to work: when I set cmd_per_lun on the
qemu command line, I see that the guest can add more disks:
With scsi-mq enabled: 175 disks
cmd_per_lun not set: 177 disks *
cmd_per_lun=16: 776 disks *
cmd_per_lun=4: 1160 disks *
With scsi-mq disabled: 1755 disks
* = new result
From my point of view, this is a good result, but you should be warned
that I don't fully understand what's going on here and I may have made
obvious or not-so-obvious mistakes.
I tested the performance impact and it's not noticeable in the
libguestfs case even with very small cmd_per_lun settings, but
libguestfs is largely serial and so this result won't be applicable to
guests in general.
Also, should the guest kernel validate cmd_per_lun to make sure it's
not too small or too large? And if so, what should the limits be?
Rich.