Note: No one has ever proposed to change the virtio ABI. In fact, the
thread in question doesn't even touch virtio, and even the patches that
I have previously posted to add virtio capability do so in a
backwards-compatible way.

Case in point: take an upstream kernel and you can modprobe
vbus-pcibridge in, and virtio devices will work over that transport
unmodified.

> vbus does things in different ways (paravirtual bus vs. pci for
> discovery), but I think we're happy with how virtio does things today.

That's fine. KVM can stick with virtio-pci if it wants. AlacrityVM will
support virtio-pci and vbus (with possible convergence with
virtio-vbus). If at some point KVM thinks vbus is interesting, I will
gladly work on getting it integrated into upstream KVM as well. Until
then, the two projects can happily coexist without issue.

> I think the reason vbus gets better performance for networking today
> is that vbus' backends are in the kernel while virtio's backends are
> currently in userspace.

Well, with all due respect, you also said initially, when I announced
vbus, that in-kernel doesn't matter, and tried to make virtio-net run
as fast as venet from userspace ;) Given that we never saw those
userspace patches from you that in fact equaled my performance, I
assume you were wrong about that statement.

Perhaps you were wrong about other things too?

> Since Michael has a functioning in-kernel backend for virtio-net now,
> I suspect we're weeks (maybe days) away from performance results. My
> expectation is that vhost + virtio-net will be as good as venet +
> vbus.

This is not entirely impossible, at least for certain simple benchmarks
like singleton throughput and latency. But if you think that this
somehow invalidates vbus as a concept, you have missed the point
entirely.

vbus is about creating flexible (e.g. cross-hypervisor, and even
physical-system or userspace-application) in-kernel IO containers with
Linux. The "guest" interface represents what I believe to be the ideal
interface for ease of use, yet maximum performance, for
software-to-software interaction.

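To make the shape of that "guest" interface concrete, here is a rough,
self-contained sketch of the kind of contract it implies: a device found
on a paravirtual bus, a shared-memory descriptor ring, and a cheap
"kick" to notify the other side, with nothing PCI-specific in the path.
The names (pv_device, shm_ring, pv_send) are hypothetical illustrations,
not the actual vbus or venet API.

/*
 * Illustrative sketch only -- these names are hypothetical and do not
 * come from the vbus code base.  The point is the guest-side shape:
 * a shared-memory descriptor ring plus a cheap "kick", independent of
 * any particular discovery mechanism (PCI or otherwise).
 */
#include <stdint.h>
#include <stdio.h>

struct shm_ring {                 /* shared-memory descriptor ring      */
	uint32_t head, tail, size;
	uint64_t desc[256];       /* guest-physical buffer addresses    */
};

struct pv_device {                /* one device instance on the pv bus  */
	const char *type;         /* e.g. "venet"                       */
	struct shm_ring tx;
};

/* Post one buffer; in a real transport the "kick" would be a doorbell,
 * hypercall, or eventfd rather than a printf. */
static int pv_send(struct pv_device *dev, uint64_t buf_gpa)
{
	struct shm_ring *r = &dev->tx;

	if (r->head - r->tail == r->size)
		return -1;                        /* ring is full       */

	r->desc[r->head % r->size] = buf_gpa;
	r->head++;

	printf("kick: %s has %u descriptor(s) pending\n",
	       dev->type, r->head - r->tail);
	return 0;
}

int main(void)
{
	struct pv_device venet = { .type = "venet", .tx = { .size = 256 } };

	return pv_send(&venet, 0x1000) ? 1 : 0;
}

The notification mechanism itself is a property of the transport, which
is exactly the part that can differ between KVM, another hypervisor, a
physical system, or a plain userspace application.
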
venet was originally crafted just to validate the approach and test the
vbus interface. It ended up being so much faster than virtio-net that
people in the vbus community started coding against its ABI.

OTOH, Michael's patch is purely targeted at improving virtio-net on KVM,
and it's likewise constrained by various limitations of that decision
(such as its reliance on the PCI model, and the KVM memory scheme). The
tradeoff is that his approach will work in all existing virtio-net KVM
guests, and is probably significantly less code since he can re-use the
qemu PCI bus model.

Conversely, I am not afraid of requiring a new driver to optimize the
general PV interface. In the long term, this will reduce the amount of
code reimplemented over and over, reduce system overhead, and add new
features not previously available (for instance, coalescing and
prioritizing interrupts).

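As a rough illustration of what interrupt coalescing means here
(hypothetical code, not taken from vbus): defer the guest notification
until either a batch of completions has accumulated or the oldest
pending completion has waited too long, so one interrupt can cover many
events without letting latency grow unbounded.

/*
 * Hypothetical sketch of an interrupt-coalescing policy; the names and
 * thresholds are illustrative, not the vbus implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct coalesce_state {
	uint32_t pending;       /* completions not yet signalled         */
	uint64_t oldest_ns;     /* arrival time of oldest pending event  */
	uint32_t max_pending;   /* batch threshold, e.g. 4 completions   */
	uint64_t max_delay_ns;  /* latency bound, e.g. 100000 ns (100us) */
};

/* Called once per completion; returns true when it is time to inject
 * a single interrupt covering the whole accumulated batch. */
static bool should_signal(struct coalesce_state *cs, uint64_t now_ns)
{
	if (cs->pending == 0)
		cs->oldest_ns = now_ns;
	cs->pending++;

	if (cs->pending >= cs->max_pending ||
	    now_ns - cs->oldest_ns >= cs->max_delay_ns) {
		cs->pending = 0;        /* one interrupt for the batch   */
		return true;
	}
	return false;                   /* keep batching                 */
}

int main(void)
{
	struct coalesce_state cs = { .max_pending = 4,
				     .max_delay_ns = 100000 };
	uint64_t t;

	/* five back-to-back completions: only the fourth fires an IRQ  */
	for (t = 0; t < 5; t++)
		printf("event %llu -> %s\n", (unsigned long long)t,
		       should_signal(&cs, t) ? "signal" : "defer");
	return 0;
}

Prioritization would be a separate policy layered on top, deciding which
device's pending signal gets delivered first; the point is simply that a
purpose-built driver gives such policies a natural home.
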
> If that's the case, then I don't see any reason to adopt vbus unless
> Greg thinks there are other compelling features over virtio.

Aside from the fact that this is another confusion of the vbus/virtio
relationship... yes, of course there are compelling features (IMHO) or I
wouldn't be expending effort ;) They are at least compelling enough to
put in AlacrityVM. If upstream KVM doesn't want them, that's KVM's
decision and I am fine with that. Simply never apply my qemu patches to
qemu-kvm.git, and KVM will be blissfully unaware of whether vbus is
present. I do hope that I can convince the KVM community otherwise,
however. :)