> There's no inherent performance problem in pci. The vbus approach has
> inherent problems (the biggest of which is compatibility, the second
> manageability).

Trying to be backwards compatible in all dimensions is not a design
goal, as already stated.

Where are the management problems?
No, you have shown me that you disagree. I'm sorry, but do not assume
they are the same.

I'm sorry, but that's just plain false.
> Existing guests (Linux and Windows) which support virtio will cease to
> work if the host moves to vbus-virtio.

Sigh...please re-read the "fact" section. And even if this work is
accepted upstream as it is, how you configure the host and guest is just
that: a configuration. If your guest and host both speak vbus, use it.
If they don't, don't use it. Simple as that. Saying anything else is
just more FUD, and I can say the same thing about a variety of other
configuration options currently available.
> Existing hosts (running virtio-pci) won't be able to talk to newer
> guests running virtio-vbus. The patch doesn't improve performance
> without the entire vbus stack in the host kernel and a
> vbus-virtio-net-host host kernel driver.

<rewind years=2>
Existing hosts (running realtek emulation) won't be able to talk to
newer guests running virtio-net. Virtio-net doesn't do anything to
improve realtek emulation without the entire virtio stack in the host.
</rewind>
You gotta start somewhere. Your argument buys you nothing other than
backwards compat, which I've already stated is not a specific goal here.

I am not against "modprobe vbus-pcibridge", and I am sure there are
users out there who do not object to this either.
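To be concrete about what I mean by "just a configuration" (the configfs
paths and device names below are only illustrative, recalled roughly from
the patches, so don't hold me to the exact strings; "modprobe
vbus-pcibridge" is the real guest-side hook):

    # Host side: create a vbus container and a venet device for the guest
    # (directory names illustrative; see the vbus docs for the exact layout)
    mount -t configfs configfs /config
    mkdir /config/vbus/instances/guest0
    mkdir /config/vbus/devices/venet0
    ln -s /config/vbus/devices/venet0 /config/vbus/instances/guest0/

    # Guest that speaks vbus: opt in by loading the bridge driver
    modprobe vbus-pcibridge

    # Guest that doesn't: do nothing; it keeps using virtio-pci (or e1000,
    # realtek, ...) exactly as it does today.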
> Perhaps if you posted everything needed to make vbus-virtio work and
> perform we could compare that to vhost-net and you'll see another
> reason why vhost-net is the better approach.

Yet, you must recognize that an alternative outcome is that we can look
at issues outside of virtio-net on KVM, and perhaps you will see vbus is
a better approach.
> There's no way we can adapt vbus to our needs.

Really? Did you ever bother to ask how? I'm pretty sure you can. And
if you couldn't, I would have considered changes to make it work.

You are also wrong to say that I didn't try to avoid creating a
downstream effort first. I believe the public record of the mailing
lists will back me up that I tried politely pushing this directly through
kvm first. It was only after Avi recently informed me that they would
be building their own version of an in-kernel backend in lieu of working
with me to adapt vbus to their needs that I decided to put my own
project together.

> Don't you think we'd have preferred it rather than writing our own?

Honestly, I am not so sure based on your responses.
I've already listed numerous examples of why I advocate vbus over PCI,
and have already stated I am not competing against virtio.
> Showing some of those non-virt uses, for example.

Actually, Ira's chassis discussed earlier is a classic example. Vbus
actually fits neatly into his model, I believe (and much better than the
vhost proposals, IMO).
Basically, IMO we want to invert Ira's bus (so that the PPC boards see
host-based devices, instead of the other way around). You write a
connector that transports the vbus verbs over the PCI link. You write a
udev rule that responds to the PPC board "arrival" event to create a new
vbus container, and assign the board to that context.
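To sketch what that would look like (the PCI IDs, helper path, and
configfs layout below are purely illustrative, nothing that exists
today):

    # /etc/udev/rules.d/99-ppc-board.rules
    # Fire a helper whenever one of the PPC boards appears on the PCI bus.
    SUBSYSTEM=="pci", ACTION=="add", ATTR{vendor}=="0x1234", ATTR{device}=="0x5678", RUN+="/usr/local/sbin/ppc-board-add %k"

    # /usr/local/sbin/ppc-board-add  (helper sketch)
    #!/bin/sh
    board="$1"
    # Create a fresh vbus container for this board and give it a venet device.
    mkdir /config/vbus/instances/"$board"
    mkdir /config/vbus/devices/venet-"$board"
    ln -s /config/vbus/devices/venet-"$board" /config/vbus/instances/"$board"/
    # Finally, point the PCI connector at this container so the board sees
    # the host-based devices (connector-specific step, not shown here).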
> The fact that your only user duplicates existing functionality doesn't
> help.

Certainly at some level, that is true and is unfortunate, I agree. In
retrospect, I wish I had started with something non-overlapping with
virtio as the demo, just to avoid this aspect of the controversy.
At another level, it's the highest-performance 802.x interface for KVM at
the moment, since we still have not seen benchmarks for vhost. Given
that I have spent a lot of time lately optimizing KVM, I can tell you
it's not trivial to get it to work better than the userspace virtio.
Michael is clearly a smart guy, so the odds are in his favor. But do
not count your chickens before they hatch, because success is not
guaranteed.
Long story short, my patches are not duplicative on all levels (i.e.
performance). It's just another ethernet driver, of which there are
probably hundreds of alternatives in the kernel already. You could also
argue that we already have multiple models in qemu (realtek, e1000,
virtio-net, etc.), so this is not without precedent. So really, all this
"fragmentation" talk is FUD. Let's stay on-point, please.
Can we talk more about that at some point? I think you will see it's not
some "evil, heavy-duty" infrastructure that some comments seem to be
trying to paint it as. I think it's similar in concept to what you need
to do for a vhost-like design, but (with all due respect to Michael) with
a little bit more thought put into the necessary abstraction points to
allow broader application.
> Note whenever I mention migration, large guests, or Windows you say
> these are not your design requirements.

Actually, I don't think I've ever said that, per se. I said that those
things are not a priority for me, personally. I never made a design
decision that I knew would preclude the support for such concepts. In
fact, afaict, the design would support them just fine, given the
resources to develop them.
For the record: I never once said "vbus is done". There is plenty of
work left to do. This is natural (kvm, I'm sure, wasn't 100% when it
went in either, nor is it today).