2) the vbus-proxy and kvm-guest patch go away
3) the kvm-host patch changes to work with coordination from the
userspace-pci emulation for things like MSI routing
4) qemu will know to create some MSI shim 1:1 with whatever it
instantiates on the bus (and can communicate changes)

Don't understand. What's this MSI shim?
Well, if the device model was an object in vbus down in the kernel, yet
PCI emulation was up in qemu, presumably we would want something to
handle things like PCI config-cycles up in userspace. Like, for
instance, if the guest re-routes the MSI. The shim/proxy would handle
the config-cycle, and then turn around and do an ioctl to the kernel to
configure the change with the in-kernel device model (or the irq
infrastructure, as required).
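To make the shim idea concrete, here is a minimal sketch of what that userspace proxy might do. Everything here is invented for illustration (the capability offset, the struct, and vbus_ioctl_msi_update are hypothetical, not an existing API): the shim traps the guest's config-cycle write, and if it touches the MSI message address/data registers, it forwards the new routing to the in-kernel device model.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Assumed location of the MSI capability in this device's config space. */
#define MSI_CAP_OFFSET  0x50
#define MSI_ADDR_LO     (MSI_CAP_OFFSET + 0x4)
#define MSI_DATA        (MSI_CAP_OFFSET + 0xC)

struct msi_route {
    uint64_t addr;
    uint32_t data;
};

/* Stand-in for the real ioctl down to the in-kernel device model. */
static struct msi_route last_routed;
static void vbus_ioctl_msi_update(const struct msi_route *r)
{
    last_routed = *r;  /* real code: ioctl(vbus_fd, VBUS_MSI_UPDATE, r) */
}

/*
 * Shim: invoked for each guest config-cycle write.  Non-MSI writes are
 * just emulated locally; MSI writes also reprogram the irq path.
 */
static void shim_config_write(uint8_t *cfg, struct msi_route *route,
                              uint32_t off, uint32_t val)
{
    memcpy(cfg + off, &val, sizeof(val));   /* emulate the write itself */

    if (off == MSI_ADDR_LO)
        route->addr = (route->addr & ~0xffffffffull) | val;
    else if (off == MSI_DATA)
        route->data = val;
    else
        return;                             /* not MSI: nothing more to do */

    vbus_ioctl_msi_update(route);           /* tell the kernel-side model */
}
```

The point is only that the config-cycle handling stays in userspace while the fast path and the device model stay in the kernel; the ioctl is the coordination channel between the two.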
But, TBH, I haven't really looked into what's actually required to make
this work yet. I am just spitballing to try to find a compromise.
No, you are confusing the front-end and back-end again ;)
The back-end remains, and holds the device models as before. This is
the "vbus core". Today the front-end interacts with the hypervisor to
render "vbus" specific devices. The proposal is to eliminate the
front-end, and have the back end render the objects on the bus as PCI
devices to the guest. I am not sure if I can make it work, yet. It
needs more thought.
I don't think you've yet set down what its advantages are. Being
pure and clean doesn't count, unless you rip out PCI from all existing
installed hardware and from Windows.
You are being overly dramatic. No one has ever said we are talking
about ripping something out. In fact, I've explicitly stated that PCI
can coexist peacefully. Having more than one bus in a system is
certainly not without precedent (PCI, SCSI, USB, etc.).
Rather, PCI is PCI, and will always be. PCI was designed as a
software-to-hardware interface. It works well for its intention. When
we do full emulation of guests, we still do PCI so that all the
software that was designed to work software-to-hardware still continues
to work, even though technically it's now software-to-software. When we
do PV, on the other hand, we no longer need to pretend it is
software-to-hardware. We can continue to use an interface designed for
software-to-hardware if we choose, or we can use something else such as
an interface designed specifically for software-to-software.
As I have stated, PCI was designed with hardware constraints in mind.
What if I don't want to be governed by those constraints? What if I
don't want an interrupt per device (I don't)? What do I need BARs for
(I don't)? Is a PCI PIO address relevant to me (no, hypercalls are more
direct)? Etc. It's crap I don't need.
All I really need is a way to a) discover and enumerate devices,
preferably dynamically (hotswap), and b) a way to communicate with those
devices. I think you are overstating the importance that PCI plays
in (a), and overstating the complexity associated with doing an
alternative.
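To illustrate how small (a) could be in a software-to-software design, here is a sketch of discovery via a single enumeration call. The wire format, the names, and the hypercall stand-in are all invented for illustration; a real host would fill the table from its bus, not from a static array.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Invented descriptor for a device on a software-to-software bus. */
struct dev_desc {
    uint32_t id;        /* bus-assigned handle                */
    char     type[32];  /* device class, e.g. "venet"         */
};

/* Stand-in for a real hypercall; the host side would fill this in. */
static int hypercall_devenum(struct dev_desc *out, int max)
{
    static const struct dev_desc table[] = {
        { 1, "venet" },
        { 2, "vdisk" },
    };
    int n = (int)(sizeof(table) / sizeof(table[0]));
    if (n > max)
        n = max;
    memcpy(out, table, (size_t)n * sizeof(*out));
    return n;           /* number of devices found on the bus */
}

/* Guest-side probe: one call, no BARs, no config cycles, no PIO. */
static int probe_bus(struct dev_desc *descs, int max)
{
    return hypercall_devenum(descs, max);
}
```

Hotswap would then just be a notification carrying the same descriptor, rather than a rescan of config space. The sketch is only meant to show that enumeration itself need not carry PCI's baggage.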
I think you are understating the level of hackiness
required to continue to support PCI as we move to new paradigms, like
in-kernel models.
And I think I have already stated that I can
establish a higher degree of flexibility, and arguably performance,
for (b).