Re: [PATCH v2 3/7] KVM: Add paravirt kvm_flush_tlb_others

From: Nikunj A Dadhania
Date: Thu Jul 05 2012 - 01:56:46 EST


On Wed, 4 Jul 2012 23:09:10 -0300, Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> On Tue, Jul 03, 2012 at 01:49:49PM +0530, Nikunj A Dadhania wrote:
> > On Tue, 3 Jul 2012 04:55:35 -0300, Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:
> > > >
> > > > if (!zero_mask)
> > > >         goto again;
> > >
> > > Can you please measure increased vmentry/vmexit overhead? x86/vmexit.c
> > > of git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git should
> > > help.
> > >
> > Sure, will get back with the results.
> >
> > > > +	/*
> > > > +	 * Guest might have seen us offline and would have set
> > > > +	 * flush_on_enter.
> > > > +	 */
> > > > +	kvm_read_guest_cached(vcpu->kvm, ghc, vs, 2*sizeof(__u32));
> > > > +	if (vs->flush_on_enter)
> > > > +		kvm_x86_ops->tlb_flush(vcpu);
> > >
> > >
> > > So flush_tlb_page, which was an invlpg, now flushes the entire TLB.
> > > Did you take that into account?
> > >
> > While the vcpu was sleeping/pre-empted out, multiple flush_tlb requests
> > could have accumulated. So by the time we get here, a single full flush
> > cleans up all of them.
>
> Yes, in cases where there are sufficient exits anyway, transforming one
> TLB entry invalidation into a full TLB invalidation should go unnoticed.
>
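For anyone following along, the guest side of this is roughly the
following (a simplified userspace model, not the actual patch; the
per-vcpu "running" flag, the shared-area layout and the helper names are
illustrative, only flush_on_enter comes from the hunk quoted above):

/*
 * Simplified userspace model of the guest-side decision in
 * kvm_flush_tlb_others(): flush a live vcpu via IPI, defer a
 * descheduled one to the host via flush_on_enter.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

struct vcpu_state {
	uint32_t running;		/* host updates this around sched in/out */
	uint32_t flush_on_enter;	/* host does a full flush on next entry */
};

static struct vcpu_state vs[NR_CPUS];	/* stands in for the shared area */

static void send_flush_ipi(int cpu)
{
	printf("IPI: flush TLB on vcpu %d\n", cpu);
}

/* Roughly what the paravirt flush path boils down to per target vcpu. */
static void pv_flush_tlb_others(const int *cpus, int n)
{
	for (int i = 0; i < n; i++) {
		int cpu = cpus[i];

		if (vs[cpu].running)
			send_flush_ipi(cpu);		/* vcpu is live, flush now */
		else
			vs[cpu].flush_on_enter = 1;	/* descheduled, defer to host */
	}
}

int main(void)
{
	int targets[] = { 1, 2, 3 };

	vs[1].running = 1;		/* vcpu1 running, vcpu2/3 preempted */
	pv_flush_tlb_others(targets, 3);
	return 0;
}

So any number of pending invalidations for a descheduled vcpu collapse
into the single flush_on_enter bit that the host-side hunk above acts on.
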
> > One other approach would be to queue the addresses, but that raises
> > the question: how many requests do we queue? It would also require
> > adding more synchronization between guest and host for updating the
> > shared area where these addresses are stored.
>
> Sounds unnecessarily complicated.
>
Yes, I did give this a try earlier, but did not see much improvement for
the amount of complexity it brought in.
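For the record, the queued-addresses variant looked roughly like the
simplified userspace model below; the queue depth, the field names and
the overflow-to-full-flush fallback are illustrative, not what I
actually had in the tree:

/*
 * Toy model of queueing individual addresses in the shared area and
 * falling back to a full flush once the queue overflows.
 */
#include <stdint.h>
#include <stdio.h>

#define FLUSH_QUEUE_LEN	8	/* hypothetical queue depth */

struct shared_flush_area {
	uint32_t nr;			/* number of queued addresses */
	uint32_t flush_on_enter;	/* set once the queue overflows */
	uint64_t addr[FLUSH_QUEUE_LEN];
};

/* Guest side: record one address, or escalate to a full flush. */
static void guest_queue_flush(struct shared_flush_area *vs, uint64_t addr)
{
	if (vs->flush_on_enter)
		return;				/* already escalated */
	if (vs->nr == FLUSH_QUEUE_LEN) {
		vs->flush_on_enter = 1;		/* queue full: full flush instead */
		vs->nr = 0;
		return;
	}
	vs->addr[vs->nr++] = addr;
}

/* Host side: on vmentry, replay the queued invlpgs or do one full flush. */
static void host_replay_flushes(struct shared_flush_area *vs)
{
	if (vs->flush_on_enter) {
		printf("full TLB flush\n");
		vs->flush_on_enter = 0;
	} else {
		for (uint32_t i = 0; i < vs->nr; i++)
			printf("invlpg %#llx\n", (unsigned long long)vs->addr[i]);
	}
	vs->nr = 0;
}

int main(void)
{
	struct shared_flush_area vs = { 0 };

	for (uint64_t a = 0x1000; a <= 0xa000; a += 0x1000)
		guest_queue_flush(&vs, a);	/* 10 requests overflow the queue */
	host_replay_flushes(&vs);		/* prints: full TLB flush */
	return 0;
}

The extra synchronization mentioned above comes from nr and
flush_on_enter being updated by the guest while the host may be
consuming them around vmexit/vmentry, and that bookkeeping ate whatever
was gained over the simple full-flush approach.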

Regards
Nikunj
