On Wed, May 05, 2010 at 10:59:51AM -0700, Avi Kivity wrote:
> Date: Wed, 5 May 2010 10:59:51 -0700
> From: Avi Kivity <avi@xxxxxxxxxx>
> To: Pankaj Thakkar <pthakkar@xxxxxxxxxx>
> CC: "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>,
>     "netdev@xxxxxxxxxxxxxxx" <netdev@xxxxxxxxxxxxxxx>,
>     "virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx"
>     <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>,
>     "pv-drivers@xxxxxxxxxx" <pv-drivers@xxxxxxxxxx>,
>     Shreyas Bhatewara <sbhatewara@xxxxxxxxxx>
> Subject: Re: RFC: Network Plugin Architecture (NPA) for vmxnet3
>
> On 05/05/2010 02:02 AM, Pankaj Thakkar wrote:
> > 2. Hypervisor control: All control operations from the guest, such as
> > programming the MAC address, go through the hypervisor layer and hence can
> > be subjected to hypervisor policies. The PF driver can further be used to
> > enforce policy decisions like which VLAN the guest should be on.
>
> Is this enforced? Since you pass the hardware through, you can't rely
> on the guest actually doing this, yes?

We don't pass the whole VF to the guest. Only the BAR which is responsible for
TX/RX/intr is mapped into guest space.
> > In NPA we do not rely on the guest OS to provide any of these services,
> > like bonding or PCI hotplug.
> >
> > We have reworked our existing Linux vmxnet3 driver to accommodate NPA by
> > splitting the driver into two parts: Shell and Plugin. The new split driver is
>
> So the Shell would be the reworked or new bond driver, and Plugins would
> be ordinary Linux network drivers.
We don't rely on the guest OS to unmap a VF and switch a VM out of
passthrough. In a bonding approach that becomes an issue: you can't just
yank a device out from underneath; you have to wait for the OS to process
the request and switch from using the VF to the emulated device, and that
makes the hypervisor dependent on the guest OS. Also, we don't rely on the
presence of all the drivers inside the guest OS (be it Linux or Windows);
the ESX hypervisor carries all the plugins and the PF drivers and injects
the right one as needed. These plugins are guest agnostic, so the IHVs do
not have to write plugins for different OSes.