On 2015-09-16 04:36, Wanpeng Li wrote:
> On 9/16/15 1:32 AM, Jan Kiszka wrote:
>> On 2015-09-15 12:14, Wanpeng Li wrote:
>>> On 9/14/15 10:54 PM, Jan Kiszka wrote:
>>>> Last but not least: the guest can now easily exhaust the host's pool of
>>>> vpids by simply spawning plenty of VCPUs for L2, no? Is this acceptable
>>>> or should there be some limit?
>>> In v2 I reuse the value of vpid02 and issue one invvpid when vpid12 is
>>> changed, so the scenario you pointed out can be avoided.
>> I cannot yet follow why there is no chance for L1 to consume all vpids
>> that the host manages in that single, global bitmap by simply spawning a
>> lot of nested VCPUs for some L2. What is enforcing L1 to call nested
>> vmclear - apparently the only way, besides destructing nested VCPUs, to
>> release such vpids again?
> In v2, there is no direct mapping between vpid02 and vpid12. The vpid02
> is per-vCPU for L0 and is reused, while a change of vpid12 only triggers
> one invvpid during nested vmentry. The vpid12 is allocated by L1 for L2,
> so it does not influence the global bitmap (used for vpid01 and vpid02
> allocation) even if L1 spawns a lot of nested vCPUs.

Ah, I see, you limit allocation to one additional host-side vpid per
VCPU, for nesting. That looks better. That also means all vpids for L2
will be folded onto that single vpid in hardware, right? So the major
benefit comes from having separate vpids when switching between L1 and
L2, in fact.
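
To make the reuse policy concrete, below is a small, self-contained
userspace sketch of the scheme as described in this thread. The names
(VMX_NR_VPIDS, alloc_vpid, struct vcpu, nested_vmentry) are illustrative
assumptions rather than the real KVM symbols, and the printf stands in
for the actual invvpid instruction: vpid01 and vpid02 are taken from the
host-wide bitmap exactly once per vCPU, while vpid12 belongs to L1 and a
change of it only causes a flush of the reused vpid02, never a new
allocation.

/* Illustrative model only; not the actual KVM/VMX code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VMX_NR_VPIDS 65536                /* assumed pool size */

static bool vpid_bitmap[VMX_NR_VPIDS];    /* host-wide allocation state */

static int alloc_vpid(void)
{
        /* vpid 0 is reserved for the host itself */
        for (int v = 1; v < VMX_NR_VPIDS; v++) {
                if (!vpid_bitmap[v]) {
                        vpid_bitmap[v] = true;
                        return v;
                }
        }
        return 0;                         /* pool exhausted: run untagged */
}

struct vcpu {
        int vpid01;                       /* tags L1 translations */
        int vpid02;                       /* tags all of this vCPU's L2 translations */
        uint16_t last_vpid12;             /* last vpid L1 programmed into vmcs12 */
};

static void vcpu_create(struct vcpu *v)
{
        v->vpid01 = alloc_vpid();
        v->vpid02 = alloc_vpid();         /* one extra host-side vpid, allocated once */
        v->last_vpid12 = 0;
}

/* Called on every nested vmentry with the vpid L1 put into vmcs12. */
static void nested_vmentry(struct vcpu *v, uint16_t vpid12)
{
        if (vpid12 != v->last_vpid12) {
                v->last_vpid12 = vpid12;
                /* stand-in for invvpid on the reused hardware vpid */
                printf("invvpid(single-context, vpid=%d)\n", v->vpid02);
        }
        /* hardware always runs L2 tagged with v->vpid02; no new allocation */
}

int main(void)
{
        struct vcpu v;

        vcpu_create(&v);
        nested_vmentry(&v, 5);            /* new vpid12: flush vpid02 */
        nested_vmentry(&v, 5);            /* same vpid12: no flush */
        nested_vmentry(&v, 9);            /* L1 switched vpid12: flush again */
        return 0;
}

However many vpid12 values L1 hands out, the hardware only ever sees the
single per-vCPU vpid02, which is the folding described above; the benefit
is that L1 (vpid01) and L2 (vpid02) translations stay tagged separately
across the L1<->L2 switch.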