> In fact, modern x86s do have dma engines these days (google for Intel
> I/OAT), and one of our plans for vhost-net is to allow their use for
> packets above a certain size. So a patch allowing vhost-net to
> optionally use a dma engine is a good thing.

Yes, I'm aware that very modern x86 PCs have general purpose DMA
engines, even though I don't have any capable hardware. However, I think
it is better to support using any PC (with or without DMA engine, any
architecture) as the PCI master, and just handle the DMA all from the
PCI agent, which is known to have DMA.

> Exposing a knob to userspace is not an insurmountable problem; vhost-net
> already allows changing the memory layout, for example.

Let me explain the most obvious problem I ran into: setting the MAC
addresses used in virtio.
On the host (PCI master), I want eth0 (virtio-net) to get a random MAC
address.
On the guest (PCI agent), I want eth0 (virtio-net) to get a specific MAC
address, aa:bb:cc:dd:ee:ff.
The virtio feature negotiation code handles this by looking for the
VIRTIO_NET_F_MAC feature in its configuration space. Unless BOTH drivers
have VIRTIO_NET_F_MAC set, NEITHER will use the specified MAC address.
This is because the feature negotiation code only accepts a feature if
it is offered by both sides of the connection.
In this case, I must have the guest generate a random MAC address and
have the host put aa:bb:cc:dd:ee:ff into the guest's configuration
space. This basically means hardcoding the MAC addresses in the Linux
drivers, which is a big no-no.
What would I expose to userspace to make this situation manageable?