On Tue, May 25, 2021 at 09:22:48AM +0200, Paolo Bonzini wrote:
On 24/05/21 16:59, Christoph Hellwig wrote:
On Thu, May 20, 2021 at 03:13:05PM +0100, Stefan Hajnoczi wrote:
Possible drawbacks of this approach:
- Hardware virtio_blk implementations may find virtqueue_disable_cb()
expensive since it requires DMA. If such devices become popular, the
virtio_blk driver could in the future use an approach similar to NVMe's
when VIRTIO_F_ACCESS_PLATFORM is detected.
- If a blk_poll() thread is descheduled, it not only hurts polling
performance but also delays completion of non-REQ_HIPRI requests on
that virtqueue, since vq notifications are disabled.
Yes, I think this is a dangerous configuration. What argument exists
against just using dedicated poll queues?
There isn't an equivalent of the admin queue in virtio-blk, which would
allow the guest to configure the desired number of poll queues. The number
of queues is fixed.
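Right now the driver just reads a fixed num_queues from config space at
probe time, and there is no admin-queue-like channel for negotiating a
separate set of poll queues with the device, so any split has to be decided
purely on the driver side. For reference, the existing init_vq() logic is
roughly:

	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
				   struct virtio_blk_config, num_queues,
				   &num_vqs);
	if (err)
		num_vqs = 1;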
Dedicated vqs can be used for polling only, and as I understand it the
device side doesn't need to know whether a vq is polled or IRQ-driven
inside the VM.
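Concretely, something along these lines could work (just a sketch against
the current driver; num_poll_vqs is a made-up knob here, since there is no
way to negotiate it with the device):

/* Hypothetical module parameter: how many of the fixed vqs are poll-only. */
static unsigned int num_poll_vqs;
module_param(num_poll_vqs, uint, 0444);

/*
 * In init_vq(): request the last num_poll_vqs vqs without a callback, so
 * the transport never assigns them an interrupt and they can only ever be
 * reaped by polling.
 */
	for (i = 0; i < num_vqs; i++) {
		if (i >= num_vqs - num_poll_vqs) {
			callbacks[i] = NULL;		/* poll-only vq */
			snprintf(vblk->vqs[i].name, VQ_NAME_LEN,
				 "req_poll.%d", i);
		} else {
			callbacks[i] = virtblk_done;	/* IRQ-driven vq */
			snprintf(vblk->vqs[i].name, VQ_NAME_LEN,
				 "req.%d", i);
		}
		names[i] = vblk->vqs[i].name;
	}
	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);

/*
 * Split the fixed vq range into default and poll blk-mq maps, NVMe-style
 * (requires tag_set.nr_maps = 2 at probe time).
 */
static int virtblk_map_queues(struct blk_mq_tag_set *set)
{
	struct virtio_blk *vblk = set->driver_data;
	struct blk_mq_queue_map *def = &set->map[HCTX_TYPE_DEFAULT];
	struct blk_mq_queue_map *poll = &set->map[HCTX_TYPE_POLL];

	def->nr_queues = vblk->num_vqs - num_poll_vqs;
	def->queue_offset = 0;
	poll->nr_queues = num_poll_vqs;
	poll->queue_offset = def->nr_queues;

	/* Default vqs follow MSI-X affinity; poll vqs have no IRQ at all. */
	blk_mq_virtio_map_queues(def, vblk->vdev, 0);
	return blk_mq_map_queues(poll);
}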
I tried that on v5.4 but didn't see an obvious IOPS boost, so I gave up:
https://github.com/ming1/linux/commits/my_v5.4-virtio-irq-poll
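The poll side of such an approach is then just reaping the vq directly;
roughly (a sketch, not necessarily what is in that branch):

static int virtblk_poll(struct blk_mq_hw_ctx *hctx)
{
	struct virtio_blk *vblk = hctx->queue->queuedata;
	struct virtio_blk_vq *vq = &vblk->vqs[hctx->queue_num];
	struct virtblk_req *vbr;
	unsigned long flags;
	unsigned int len;
	int found = 0;

	spin_lock_irqsave(&vq->lock, flags);
	while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
		struct request *req = blk_mq_rq_from_pdu(vbr);

		if (likely(!blk_should_fake_timeout(req->q)))
			blk_mq_complete_request(req);
		found++;
	}
	spin_unlock_irqrestore(&vq->lock, flags);

	return found;
}

Since a poll-only vq never has its callback armed, there is no
virtqueue_disable_cb()/virtqueue_enable_cb() toggling in the fast path at
all, which also sidesteps the DMA-cost concern above for hardware
implementations.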