The problem isn't where to find the models...the problem is how to
aggregate multiple models to the guest.
> You instantiate multiple vhost-nets. Multiple ethernet NICs is a
> supported configuration for kvm.

But this is not KVM. His slave boards surface themselves as PCI devices
to the x86 host. So how do you use that to make multiple vhost-based
devices (say two virtio-nets, and a virtio-console) communicate across
the transport?

> I don't really see the difference between 1 and N here.

KVM surfaces N virtio devices as N PCI devices to the guest. What do we
do in Ira's case, where the entire guest represents itself as a single
PCI device to the host, and nothing the other way around?
> I'm not sure if you're talking about the configuration interface or
> the data path here.

I am talking about how we would tunnel the config space for N devices
across his transport. There are multiple ways to do this, but what I am
saying is that whatever is conceived will start to look eerily like a
vbus-connector, since this is one of its primary purposes ;)
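
To make that concrete, here is a minimal sketch of what such a tunnel
might look like on the wire. Everything in it (struct layout, names,
opcodes) is invented for illustration; the point is only that once N
devices share one transport, some demultiplexing header has to carry a
per-device identifier:

/*
 * Hypothetical wire format for tunneling virtio config-space accesses
 * for N devices across a single PCI link.  Names and layout are
 * invented for illustration only.
 */
#include <stdint.h>

enum tunnel_op {
    TUNNEL_CONFIG_READ  = 1,    /* read 'len' bytes at 'offset'  */
    TUNNEL_CONFIG_WRITE = 2,    /* write 'len' bytes at 'offset' */
};

struct tunnel_msg {
    uint32_t devid;     /* which of the N virtio devices is addressed */
    uint16_t op;        /* TUNNEL_CONFIG_READ / TUNNEL_CONFIG_WRITE   */
    uint16_t len;       /* payload length in bytes                    */
    uint32_t offset;    /* offset into that device's config space     */
    uint8_t  payload[]; /* write data, or read results in the reply   */
};

The devid field is exactly the aggregation point under discussion: with
virtio-pci, PCI itself provides it (one function per device), while over
a single tunneled PCI device something has to reintroduce it, which is
the role a vbus-connector plays.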
> They aren't in the "guest". The best way to look at it is
>
>  - a device side, with a dma engine: vhost-net
>  - a driver side, only accessing its own memory: virtio-net
>
> Given that Ira's config has the dma engine in the ppc boards, that's
> where vhost-net would live (the ppc boards acting as NICs to the x86
> board, essentially).

That sounds convenient given his hardware, but it has its own set of
problems. For one, the configuration/inventory of these boards is now
driven by the wrong side and has to be addressed. Second, the role
reversal will likely not work for many models other than ethernet (e.g.
virtio-console or virtio-blk drivers running on the x86 board would
naturally be consuming services from the slave boards...virtio-net is an
exception because 802.x is generally symmetrical).
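
That split is easy to model in a few lines of userspace C. The sketch
below is purely illustrative (memcpy() stands in for the DMA engine,
and every name is invented): the driver side only ever touches its own
ring, while the device side reaches across to pull descriptors, which
is why the vhost role naturally lands wherever the DMA engine lives.

/* illustrative-only model of the vhost/virtio split: the "driver"
 * posts buffers in its own memory; the "device" side, which owns the
 * DMA engine, reaches across to fetch them */
#include <stdio.h>
#include <string.h>

struct desc { const void *addr; size_t len; };

/* driver side (virtio role): touches only its own memory */
static struct desc driver_ring[16];
static unsigned driver_head;

static void driver_post(const void *buf, size_t len)
{
    driver_ring[driver_head % 16] = (struct desc){ buf, len };
    driver_head++;
}

/* stand-in for a real DMA engine reading remote (x86) memory */
static void dma_read(void *dst, const void *remote_src, size_t len)
{
    memcpy(dst, remote_src, len);
}

/* device side (vhost role): pulls descriptors and payloads across */
static void device_poll(unsigned *tail)
{
    while (*tail != driver_head) {
        struct desc d;
        char data[64] = { 0 };

        dma_read(&d, &driver_ring[*tail % 16], sizeof(d));
        dma_read(data, d.addr,
                 d.len < sizeof(data) ? d.len : sizeof(data) - 1);
        printf("device side consumed: %s\n", data);
        (*tail)++;
    }
}

int main(void)
{
    unsigned tail = 0;

    driver_post("packet-0", sizeof("packet-0"));
    driver_post("packet-1", sizeof("packet-1"));
    device_poll(&tail);
    return 0;
}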
> I have no idea, that's for Ira to solve.

Bingo. Thus my statement that the vhost proposal is incomplete. You
have the virtio-net and vhost-net pieces covering the fast-path
end-points, but nothing in the middle (transport, aggregation,
config-space), and nothing on the management side. vbus provides most
of the other pieces, and can even support the same virtio-net protocol
on top. The remaining part would be something like a udev script to
populate the vbus with devices on board-insert events.
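
As a rough illustration of that last step, a board-insert hook could be
as small as the helper below. It is hypothetical from end to end: the
udev rule, the /config/vbus paths, and the venet- naming are all
invented for illustration and are not vbus's actual management ABI.

/*
 * Hypothetical board-insert helper.  A udev rule such as
 *
 *   ACTION=="add", SUBSYSTEM=="pci", ATTR{vendor}=="0xabcd", \
 *       RUN+="/sbin/vbus-populate %k"
 *
 * could invoke it on hot-add of a slave board.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    char path[256];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <board-id>\n", argv[0]);
        return 1;
    }

    /* create a device node for the new board on the bus */
    snprintf(path, sizeof(path), "/config/vbus/devices/venet-%s",
             argv[1]);
    if (mkdir(path, 0755) != 0) {
        perror("mkdir");
        return 1;
    }

    printf("populated %s\n", path);
    return 0;
}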
> If he could fake the PCI config space as seen by the x86 board, he
> would just show the normal pci config and use virtio-pci (multiple
> channels would show up as a multifunction device). Given he can't, he
> needs to tunnel the virtio config space some other way.

Right, and note that vbus was designed to solve this. This tunneling
can, of course, be done without vbus using some other design. However,
whatever solution is created will look incredibly close to what I've
already done, so my point is "why reinvent it"?
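
Tying the two halves together: on the x86 side, "tunnel the virtio
config space some other way" essentially means backing the config
accessors of a non-PCI virtio transport (in the spirit of the kernel's
virtio_config_ops) with messages like the tunnel_msg sketched earlier.
Again a hedged sketch with every name invented; the stub
send_and_wait() stands in for whatever primitive actually crosses
Ira's PCI link:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define TUNNEL_CONFIG_READ  1
#define TUNNEL_CONFIG_WRITE 2

/* invented stand-in: move a request across the link, wait for the
 * reply; here it just logs and zeroes the buffer so the sketch is
 * self-contained */
static int send_and_wait(uint32_t devid, uint16_t op, uint32_t offset,
                         void *buf, uint16_t len)
{
    printf("dev %u: %s %u bytes @ config offset %u\n",
           (unsigned)devid,
           op == TUNNEL_CONFIG_READ ? "read" : "write",
           (unsigned)len, (unsigned)offset);
    if (op == TUNNEL_CONFIG_READ)
        memset(buf, 0, len);
    return 0;
}

/* virtio-pci reads config fields through a BAR; a tunneled transport
 * does the same thing with messages instead */
static int tunnel_config_get(uint32_t devid, uint32_t offset,
                             void *buf, uint16_t len)
{
    return send_and_wait(devid, TUNNEL_CONFIG_READ, offset, buf, len);
}

static int tunnel_config_set(uint32_t devid, uint32_t offset,
                             void *buf, uint16_t len)
{
    return send_and_wait(devid, TUNNEL_CONFIG_WRITE, offset, buf, len);
}

int main(void)
{
    uint8_t mac[6] = { 0 };

    /* e.g. fetch, then write back, a virtio-net MAC from/to the
     * tunneled config space of device 0 */
    tunnel_config_get(0, 0, mac, sizeof(mac));
    tunnel_config_set(0, 0, mac, sizeof(mac));
    return 0;
}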