Avi Kivity wrote:
> On 09/16/2009 05:10 PM, Gregory Haskins wrote:
>> 3) "in-kernel": You can do something like virtio-net to vhost to
>> potentially meet some of the requirements, but not all.
>
> In the kernel. IMO that's the wrong place for it.
>
> If kvm can do it, others can.

The problem is that you seem to either hand-wave over details like this,
or you give details that are pretty much exactly what vbus does already.
My point is that I've already sat down and thought about these issues
and solved them in a freely available GPL'ed software package.

In order to fully meet (3), you would need to do some of that stuff you
mentioned in the last reply with muxing device-nr/reg-nr. In addition,
we need to have a facility for mapping eventfds and establishing a
signaling mechanism (like PIO+qid), etc. KVM does this with
IRQFD/IOEVENTFD, but we don't have KVM in this case, so it needs to be
invented.
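
Just to make that concrete, the kind of plumbing that would have to be
invented is roughly the following. This is a sketch only; the names
(doorbell_add(), doorbell_signal(), etc.) are made up for illustration
and this is not the actual kvm or vbus code:

/*
 * Hypothetical doorbell -> eventfd mux (made-up names; analogous in
 * spirit to KVM's IOEVENTFD).  A doorbell write of (device-nr, reg-nr)
 * decoded by the transport is turned into an eventfd_signal(), which
 * wakes whatever backend thread services that ring.
 */
#include <linux/errno.h>
#include <linux/eventfd.h>
#include <linux/err.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct doorbell {
	u32                 dev_nr;    /* which device ("device-nr") */
	u32                 reg_nr;    /* which ring   ("reg-nr")    */
	struct eventfd_ctx *eventfd;   /* backend waits on this      */
	struct list_head    list;
};

static LIST_HEAD(doorbells);
static DEFINE_SPINLOCK(doorbell_lock);

/* registration: the device model hands us an eventfd to hook up */
static int doorbell_add(u32 dev_nr, u32 reg_nr, int fd)
{
	struct doorbell *db = kzalloc(sizeof(*db), GFP_KERNEL);

	if (!db)
		return -ENOMEM;

	db->dev_nr  = dev_nr;
	db->reg_nr  = reg_nr;
	db->eventfd = eventfd_ctx_fdget(fd);
	if (IS_ERR(db->eventfd)) {
		int ret = PTR_ERR(db->eventfd);

		kfree(db);
		return ret;
	}

	spin_lock(&doorbell_lock);
	list_add_rcu(&db->list, &doorbells);
	spin_unlock(&doorbell_lock);

	return 0;
}

/* fast path: called by the transport/connector on a doorbell write */
static int doorbell_signal(u32 dev_nr, u32 reg_nr)
{
	struct doorbell *db;
	int ret = -ENOENT;

	rcu_read_lock();
	list_for_each_entry_rcu(db, &doorbells, list) {
		if (db->dev_nr == dev_nr && db->reg_nr == reg_nr) {
			eventfd_signal(db->eventfd, 1);
			ret = 0;
			break;
		}
	}
	rcu_read_unlock();

	return ret;
}

KVM keys this off a PIO/MMIO address match (IOEVENTFD) and injects
interrupts the other way with IRQFD; without KVM you need an equivalent
keyed off whatever the transport can decode (device-nr/reg-nr, PIO+qid,
or similar).
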
To meet performance, this stuff has to be in kernel and there has to be
a way to manage it.
Since vbus was designed to do exactly that, this is
what I would advocate. You could also reinvent these concepts and put
your own mux and mapping code in place, in addition to all the other
stuff that vbus does. But I am not clear why anyone would want to.

So no, the kernel is not the wrong place for it. It's the _only_ place
for it. Otherwise, just use (1) and be done with it.

> Further, if we adopt vbus, we drop compatibility with existing guests
> or have to support both vbus and virtio-pci.

We already need to support both (at least to support Ira). virtio-pci
doesn't work here. Something else (vbus, or vbus-like) is needed.

>> So the question is: is your position that vbus is all wrong and you
>> wish to create a new bus-like thing to solve the problem?
>
> I don't intend to create anything new, I am satisfied with virtio. If
> it works for Ira, excellent. If not, too bad.

I think that about sums it up, then.

>> If so, how is it different from what I've already done? More
>> importantly, what specific objections do you have to what I've done,
>> as perhaps they can be fixed instead of starting over?
>
> The two biggest objections are:
>
> - the host side is in the kernel

As it needs to be.

With all due respect, based on all of your comments in aggregate, I
really do not think you are truly grasping what I am actually building
here.

>> Bingo. So now it's a question of do you want to write this layer from
>> scratch, or re-use my framework.
>
> You will have to implement a connector or whatever for vbus as well.
> vbus has more layers so it's probably smaller for vbus.

Bingo! That is precisely the point.
All the stuff for how to map eventfds, handle signal mitigation, demux
device/function pointers, isolation, etc., is built in. All the
connector has to do is transport the 4-6 verbs and provide a memory
mapping/copy function, and the rest is reusable. The device models
would then work in all environments unmodified, and likewise the
connectors could use all device-models unmodified.
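
To put it another way, the connector boils down to something on the
order of this. Again, a rough sketch with made-up names, not the
literal vbus interface, but it is the shape of it:

/*
 * Sketch of what a "connector" has to supply.  The transport-specific
 * piece is a handful of verbs plus a way to reach the other side's
 * memory; everything else is common code.
 */
#include <linux/types.h>

struct connector;

struct connector_ops {
	/* device lifecycle verbs, relayed to/from the remote side */
	int (*dev_add)(struct connector *conn, u32 dev_nr, const char *type);
	int (*dev_drop)(struct connector *conn, u32 dev_nr);
	int (*dev_open)(struct connector *conn, u32 dev_nr);
	int (*dev_close)(struct connector *conn, u32 dev_nr);

	/* fast path: shared-memory setup and doorbell signaling */
	int (*shm_map)(struct connector *conn, u32 dev_nr, u32 reg_nr,
		       u64 remote_addr, size_t len);
	int (*signal)(struct connector *conn, u32 dev_nr, u32 reg_nr);

	/* memory access into the other side's address space */
	int (*copy_to)(struct connector *conn, u64 remote_addr,
		       const void *src, size_t len);
	int (*copy_from)(struct connector *conn, void *dst,
			 u64 remote_addr, size_t len);
};

struct connector {
	const struct connector_ops *ops;
	void                       *priv; /* e.g. PCI BAR mapping, kvm, ... */
};
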
> It was already implemented three times for virtio, so apparently
> that's extensible too.

And to my point, I'm trying to commoditize as much of that process as
possible on both the frontend and the backend (at least for cases where
performance matters) so that you don't need to reinvent the wheel for
each one.

> You mean, if the x86 board was able to access the disks and dma into
> the ppc boards' memory? You'd run vhost-blk on x86 and virtio-net on
> ppc.

But as we discussed, vhost doesn't work well if you try to run it on the
x86 side due to its assumptions about pageable "guest" memory, right? So
is that even an option? And even still, you would still need to solve
the aggregation problem so that multiple devices can coexist.
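
To illustrate the memory point: what the backend needs is for its
"guest" accesses to go through something like the following (sketch
only, made-up names), so the same device model can sit behind either a
pageable user mapping or an ioremap()'d PCI window:

/*
 * Sketch only: the backend's memory access sits behind an ops table
 * instead of hard-coding "guest memory is pageable user memory".
 */
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct guest_mem_ops {
	int (*copy_from)(void *dst, u64 gaddr, size_t len, void *priv);
	int (*copy_to)(u64 gaddr, const void *src, size_t len, void *priv);
};

/* local guest case: "gaddr" is really a userspace virtual address */
static int user_copy_from(void *dst, u64 gaddr, size_t len, void *priv)
{
	return copy_from_user(dst, (void __user *)(unsigned long)gaddr, len) ?
		-EFAULT : 0;
}

/* remote board case: "gaddr" is an offset into an ioremap()'d PCI BAR */
static int pci_window_copy_from(void *dst, u64 gaddr, size_t len, void *priv)
{
	void __iomem *bar = priv;

	memcpy_fromio(dst, bar + gaddr, len);
	return 0;
}
/* (the copy_to side is analogous) */

As I understand vhost, it is built entirely around the first flavor,
which is exactly the assumption that breaks in Ira's topology.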