On 15.12.15 at 15:36, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 12/14/2015 10:27 AM, Konrad Rzeszutek Wilk wrote:
>> On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
>>> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
>>> will likely perform the same IPIs as the guest would have.
>> But if the VCPU is asleep, doing it via the hypervisor will save us waking
>> up the guest VCPU and sending an IPI - just to do a TLB flush
>> of that CPU. Which is pointless as the CPU hadn't been running the
>> guest in the first place.
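
[For context: a condensed sketch of the guest-side path being discussed,
modeled on Linux's xen_flush_tlb_others() in arch/x86/xen/mmu.c. The real
code batches the op through the multicall machinery and handles preemption;
both are omitted here, and the function name is illustrative.]

    /* Hand the remote TLB flush to the hypervisor instead of IPIing. */
    static void pv_flush_tlb_others(const struct cpumask *cpus,
                                    unsigned long addr)
    {
            struct {
                    struct mmuext_op op;
                    DECLARE_BITMAP(mask, NR_CPUS);
            } args;

            /* Target only online CPUs other than ourselves; the hypervisor
             * additionally skips vCPUs that are not currently running. */
            cpumask_and(to_cpumask(args.mask), cpus, cpu_online_mask);
            cpumask_clear_cpu(smp_processor_id(), to_cpumask(args.mask));
            args.op.arg2.vcpumask = to_cpumask(args.mask);

            if (addr == TLB_FLUSH_ALL) {
                    args.op.cmd = MMUEXT_TLB_FLUSH_MULTI;  /* flush everything */
            } else {
                    args.op.cmd = MMUEXT_INVLPG_MULTI;     /* flush one address */
                    args.op.arg1.linear_addr = addr;
            }

            HYPERVISOR_mmuext_op(&args.op, 1, NULL, DOMID_SELF);
    }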
>>> More importantly, using MMUEXT_INVLPG_MULTI may fail to invalidate the
>>> guest's address on a remote CPU (when, for example, a VCPU from another
>>> guest is running there).
>> Right, so the hypervisor won't even send an IPI there.
>> But if you do it via the normal guest IPI mechanism (which is opaque
>> to the hypervisor) you end up scheduling the guest VCPU just to
>> send a hypervisor callback. And the callback will go to the IPI routine,
>> which will do a TLB flush. Not necessary.
>> This is all in the case of oversubscription, of course. In the case where
>> we are fine on vCPU resources it does not matter.
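
[Again for context: the generic IPI-based path the guest would otherwise
use, sketched after native_flush_tlb_others(). Under Xen the IPI is
delivered as an event-channel notification, so each target vCPU has to be
scheduled just to run the handler - the cost described above. The function
names here are illustrative.]

    /* Runs on each target CPU in response to the flush IPI. */
    static void ipi_flush_one(void *info)
    {
            unsigned long addr = *(unsigned long *)info;

            if (addr == TLB_FLUSH_ALL)
                    __flush_tlb();          /* reload CR3: flush everything */
            else
                    __flush_tlb_one(addr);  /* invlpg: flush one address */
    }

    static void ipi_flush_tlb_others(const struct cpumask *cpus,
                                     unsigned long addr)
    {
            /* Send the IPI and wait for all handlers to finish. */
            smp_call_function_many(cpus, ipi_flush_one, &addr, 1);
    }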
> So then should we keep these two operations (MMUEXT_INVLPG_MULTI and
> MMUEXT_TLB_FLUSH_MULTI) available to HVM/PVH guests? If the guest's VCPU
> is not running then the TLBs must have been flushed.

While I followed the discussion, it didn't become clear to me what
uses these are for HVM guests, considering the separate address
spaces.
As long as they're useless if called, I'd still favor making
them inaccessible.
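
[A hypothetical sketch of that suggestion: reject the two ops in the
hypervisor's mmuext_op handler when the caller is a translated (HVM/PVH)
guest. The helper, its placement, and the error value are assumptions for
illustration, not a committed Xen change.]

    /* Refuse MMU-extension flush ops that make no sense for guests
     * running on a translated (separate) address space. */
    static int check_mmuext_cmd(const struct domain *d, unsigned int cmd)
    {
        switch ( cmd )
        {
        case MMUEXT_TLB_FLUSH_MULTI:
        case MMUEXT_INVLPG_MULTI:
            /* Useless under a translated address space - make it an error. */
            if ( paging_mode_translate(d) )
                return -EOPNOTSUPP;
            /* fall through */
        default:
            return 0;
        }
    }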