On Wed, Feb 12, 2020 at 05:33:06PM +0800, Jason Wang wrote:
> On 2020/2/12 4:53 PM, Jason Wang wrote:
> > On 2020/2/12 4:18 PM, Michael S. Tsirkin wrote:
> > > On Wed, Feb 12, 2020 at 11:39:54AM +0800, Jason Wang wrote:
> > > > On 2020/2/11 7:33 PM, Michael S. Tsirkin wrote:
> > > > > On Mon, Feb 10, 2020 at 05:05:17PM +0800, Zha Bin wrote:
> > > > > > From: Liu Jiang <gerry@xxxxxxxxxxxxxxxxx>
> > > > > >
> > > > > > The standard virtio-mmio devices use a notification register to
> > > > > > signal the backend. This causes vmexits and slows down performance
> > > > > > when we passthrough the virtio-mmio devices to guest virtual
> > > > > > machines. We proposed updating the virtio-over-MMIO spec to add
> > > > > > the per-queue notify feature VIRTIO_F_MMIO_NOTIFICATION [1],
> > > > > > which allows the VMM to configure the notify location for each
> > > > > > queue.
> > > > > >
> > > > > > [1] https://lkml.org/lkml/2020/1/21/31
> > > > > >
> > > > > > Signed-off-by: Liu Jiang <gerry@xxxxxxxxxxxxxxxxx>
> > > > > > Co-developed-by: Zha Bin <zhabin@xxxxxxxxxxxxxxxxx>
> > > > > > Signed-off-by: Zha Bin <zhabin@xxxxxxxxxxxxxxxxx>
> > > > > > Co-developed-by: Jing Liu <jing2.liu@xxxxxxxxxxxxxxx>
> > > > > > Signed-off-by: Jing Liu <jing2.liu@xxxxxxxxxxxxxxx>
> > > > > > Co-developed-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> > > > > > Signed-off-by: Chao Peng <chao.p.peng@xxxxxxxxxxxxxxx>
> > > > > Hmm. Any way to make this static so we don't need
> > > > > base and multiplier?
> > > > E.g page per vq?
> > > >
> > > > Thanks
> > > Problem is, is page size well defined enough?
> > > Are there cases where guest and host page sizes differ?
> > > I suspect there might be.
> > Right, so it looks better to keep base and multiplier, e.g. for vDPA.
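
For concreteness, the base-and-multiplier scheme being debated amounts to the
arithmetic sketched below (struct and function names are made up for the
example, not taken from the patch; virtio-pci's notify_off_multiplier is the
closest existing analogue):

/*
 * Illustrative sketch only: deriving each queue's doorbell offset from a
 * base and a multiplier, as the VIRTIO_F_MMIO_NOTIFICATION proposal
 * describes.
 */
#include <stdint.h>

struct vm_notify_cfg {
	uint32_t notify_base;	/* offset of queue 0's notify register */
	uint32_t notify_mult;	/* byte stride between per-queue registers */
};

/* Doorbell offset of a virtqueue: base + queue_index * multiplier. */
static inline uint32_t vm_notify_offset(const struct vm_notify_cfg *cfg,
					uint16_t queue_index)
{
	return cfg->notify_base + (uint32_t)queue_index * cfg->notify_mult;
}

A static page-per-vq layout would hard-code notify_mult to the page size,
which is exactly where the page-size question above comes in; keeping both
fields lets a device (e.g. a vDPA device with its own doorbell stride)
advertise whatever layout it actually has.
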
> > > But I also think this whole patch is unproven. Is someone actually
> > > working on QEMU code to support pass-through of virtio-pci
> > > as virtio-mmio for nested guests? What's the performance
> > > gain like?
> > I don't know.
> >
> > Thanks
> Btw, I think there's no need for a nested environment to test. The
> current eventfd hook to MSI-X should still work for MMIO.
>
> Thanks

Oh yes, it's the wildcard thingy, but how much extra performance does one
get from it with MMIO? A couple % might not be worth the trouble for MMIO.
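
The "wildcard thingy" here is presumably KVM's ioeventfd matching: with the
single shared QueueNotify register, a VMM can only route a queue's kick to
an eventfd by datamatch-ing on the queue index the guest writes, while a
per-queue notify address allows a plain wildcard registration. A rough
sketch, using only the existing KVM_IOEVENTFD ioctl (the helper and its
parameters are invented for illustration):

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Hook a guest doorbell write to an eventfd so the kick is handled
 * entirely in the kernel. per_queue_addr models the proposed feature:
 * each queue gets its own address, so no datamatch is needed.
 */
static int hook_doorbell(int vm_fd, uint64_t gpa, uint16_t queue_index,
			 int per_queue_addr)
{
	int efd = eventfd(0, EFD_NONBLOCK);
	if (efd < 0)
		return -1;

	struct kvm_ioeventfd ioev = {
		.addr = gpa,	/* guest-physical address of the register */
		.len  = 4,	/* QueueNotify is a 32-bit write */
		.fd   = efd,
	};
	if (!per_queue_addr) {
		/* Shared register: match only writes of this queue's index. */
		ioev.flags	= KVM_IOEVENTFD_FLAG_DATAMATCH;
		ioev.datamatch	= queue_index;
	}
	if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
		close(efd);
		return -1;
	}
	return efd;	/* the backend waits on this fd for kicks */
}

Both variants keep the exit in the kernel, which is why the extra gain from
going wildcard may well be the couple of percent questioned above.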