On Sat, Dec 12, 2015 at 07:25:55PM -0500, Boris Ostrovsky wrote:
> Using MMUEXT_TLB_FLUSH_MULTI doesn't buy us much since the hypervisor
> will likely perform the same IPIs as the guest would have.

But if the VCPU is asleep, doing it via the hypervisor will save us waking
up the guest VCPU and sending an IPI - just to do a TLB flush
of that CPU. Which is pointless as the CPU hadn't been running the
guest in the first place.
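
To make the trade-off concrete, the hypercall path is roughly the sketch
below. It is a simplified, single-op version of what xen_flush_tlb_others()
does - the real code goes through the multicall machinery - so treat it as
an illustration rather than the exact mmu.c code.

#include <linux/bug.h>
#include <linux/cpumask.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>               /* TLB_FLUSH_ALL */
#include <asm/xen/hypercall.h>          /* HYPERVISOR_mmuext_op() */
#include <xen/interface/xen.h>          /* struct mmuext_op, MMUEXT_* */

/* Ask Xen to flush the TLBs of all vCPUs in @cpus with one hypercall. */
static void xen_flush_tlb_multi_sketch(const struct cpumask *cpus,
                                       unsigned long addr)
{
        struct {
                struct mmuext_op op;
                DECLARE_BITMAP(mask, NR_CPUS);
        } args;

        cpumask_copy(to_cpumask(args.mask), cpus);
        cpumask_clear_cpu(smp_processor_id(), to_cpumask(args.mask));

        if (addr == TLB_FLUSH_ALL) {
                args.op.cmd = MMUEXT_TLB_FLUSH_MULTI;   /* whole TLB */
        } else {
                args.op.cmd = MMUEXT_INVLPG_MULTI;      /* one address */
                args.op.arg1.linear_addr = addr;
        }
        args.op.arg2.vcpumask = to_cpumask(args.mask);

        /* Xen only bothers the vCPUs that are currently running. */
        if (HYPERVISOR_mmuext_op(&args.op, 1, NULL, DOMID_SELF) < 0)
                BUG();
}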

> More importantly, using MMUEXT_INVLPG_MULTI may not invalidate the
> guest's address on a remote CPU (when, for example, a VCPU from another
> guest is running there).

Right, so the hypervisor won't even send an IPI there.

But if you do it via the normal guest IPI mechanism (which is opaque
to the hypervisor) you end up scheduling the guest VCPU just to send it
a hypervisor callback. And the callback will go to the IPI routine
which will do a TLB flush. Not necessary.
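
For contrast, the guest-IPI path that an auto-translated guest falls back
to boils down to a generic cross-call like the stripped-down sketch below
(not the actual native_flush_tlb_others(), which also passes the mm and
the address range to the remote flush function):

#include <linux/cpumask.h>
#include <linux/preempt.h>
#include <linux/smp.h>
#include <asm/tlbflush.h>

/* Runs on every target CPU, in IPI context. */
static void ipi_flush_tlb_local(void *info)
{
        local_flush_tlb();
}

static void guest_ipi_flush_tlb_others(const struct cpumask *cpus)
{
        /*
         * These IPIs are opaque to Xen: for every descheduled vCPU in
         * @cpus, Xen first has to schedule the vCPU and inject an
         * event-channel callback before ipi_flush_tlb_local() can run.
         */
        preempt_disable();
        smp_call_function_many(cpus, ipi_flush_tlb_local, NULL, true);
        preempt_enable();
}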

This is all in the oversubscribed case, of course. When we are fine on
vCPU resources it does not matter.

Perhaps if we had a PV-aware TLB flush it could do this differently?
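
Something along the lines of the sketch below, maybe. This is purely
hypothetical: xen_vcpu_is_running() and xen_defer_remote_flush() do not
exist in any current interface, and the whole idea assumes descheduled
vCPUs get their TLBs flushed before they run again. It reuses
ipi_flush_tlb_local() from the previous sketch.

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/preempt.h>
#include <linux/smp.h>

/*
 * Hypothetical "PV aware" flush: only IPI the vCPUs that are running
 * right now, and leave the descheduled ones to be flushed when they are
 * next scheduled in.
 */
static void pv_aware_flush_tlb_others(const struct cpumask *cpus)
{
        cpumask_var_t running;
        unsigned int cpu;

        if (!zalloc_cpumask_var(&running, GFP_ATOMIC))
                return;

        for_each_cpu(cpu, cpus) {
                if (xen_vcpu_is_running(cpu))           /* hypothetical */
                        cpumask_set_cpu(cpu, running);
                else
                        xen_defer_remote_flush(cpu);    /* hypothetical */
        }

        preempt_disable();
        smp_call_function_many(running, ipi_flush_tlb_local, NULL, true);
        preempt_enable();

        free_cpumask_var(running);
}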

> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx # 3.14+
> ---
>  arch/x86/xen/mmu.c | 9 ++-------
>  1 files changed, 2 insertions(+), 7 deletions(-)
> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
> index 9c479fe..9ed7eed 100644
> --- a/arch/x86/xen/mmu.c
> +++ b/arch/x86/xen/mmu.c
> @@ -2495,14 +2495,9 @@ void __init xen_init_mmu_ops(void)
>  {
>  	x86_init.paging.pagetable_init = xen_pagetable_init;
> -	/* Optimization - we can use the HVM one but it has no idea which
> -	 * VCPUs are descheduled - which means that it will needlessly IPI
> -	 * them. Xen knows so let it do the job.
> -	 */
> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> -		pv_mmu_ops.flush_tlb_others = xen_flush_tlb_others;
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
>  		return;
> -	}
> +
>  	pv_mmu_ops = xen_mmu_ops;
>  	memset(dummy_mapping, 0xff, PAGE_SIZE);
> --
> 1.7.1