Re: [PATCH v2 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()

From: Oliver Upton
Date: Tue Apr 04 2023 - 15:09:34 EST


On Mon, Apr 03, 2023 at 02:23:17PM -0700, Raghavendra Rao Ananta wrote:
> On Wed, Mar 29, 2023 at 5:53 PM Oliver Upton <oliver.upton@xxxxxxxxx> wrote:
> >
> > On Mon, Feb 06, 2023 at 05:23:37PM +0000, Raghavendra Rao Ananta wrote:
> > > Implement kvm_arch_flush_remote_tlbs_range() for arm64,
> > > such that it can utilize the TLBI range based instructions
> > > if supported.
> > >
> > > Signed-off-by: Raghavendra Rao Ananta <rananta@xxxxxxxxxx>
> > > ---
> > > arch/arm64/include/asm/kvm_host.h | 3 +++
> > > arch/arm64/kvm/mmu.c | 15 +++++++++++++++
> > > 2 files changed, 18 insertions(+)
> > >
> > > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > > index dee530d75b957..211fab0c1de74 100644
> > > --- a/arch/arm64/include/asm/kvm_host.h
> > > +++ b/arch/arm64/include/asm/kvm_host.h
> > > @@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
> > > #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
> > > int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
> > >
> > > +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> > > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
> > > +
> > > static inline bool kvm_vm_is_protected(struct kvm *kvm)
> > > {
> > > 	return false;
> > > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > > index e98910a8d0af6..409cb187f4911 100644
> > > --- a/arch/arm64/kvm/mmu.c
> > > +++ b/arch/arm64/kvm/mmu.c
> > > @@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
> > > 	return 0;
> > > }
> > >
> > > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
> > > +{
> > > +	phys_addr_t start, end;
> > > +
> > > +	if (!system_supports_tlb_range())
> > > +		return -EOPNOTSUPP;
> >
> > There are multiple layers of fallback throughout this series, as it would
> > appear that deep in __kvm_tlb_flush_range() you're blasting the whole
> > VMID if either the range is too large or the feature isn't supported.
> >
> > Is it possible to just normalize on a single spot to gate the use of
> > range-based invalidations? I have a slight preference for doing it deep
> > in the handler, as it keeps the upper layers of code a bit more
> > readable.
> >
> I was a little skeptical about this part, since
> kvm_arch_flush_remote_tlbs_range() is expected to return -EOPNOTSUPP if
> there's indeed no support.

Well, the arch-neutral code can expect whatever it wants :) The only
real contract we have with it is to return 0 iff the specified range has
been invalidated, even if that comes with over-invalidating.
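
Concretely, I'm thinking of something along these lines (completely
untested, and the range-based hyp call at the end is only a placeholder
for whatever the series settles on -- its signature here is a guess):

int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
{
	phys_addr_t start = start_gfn << PAGE_SHIFT;
	phys_addr_t end = (start_gfn + pages) << PAGE_SHIFT;

	/*
	 * No FEAT_TLBIRANGE: over-invalidate by flushing the whole
	 * VMID. The requested range is certainly gone afterwards, so
	 * returning 0 still honours the contract.
	 */
	if (!system_supports_tlb_range())
		return kvm_arch_flush_remote_tlbs(kvm);

	/*
	 * Placeholder for the series' range-based hyp call covering
	 * [start, end); don't read anything into the argument list.
	 */
	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
		     start, end);
	return 0;
}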

> But I see your point. The if-else in kvm_pgtable_stage2_flush_range()
> seems redundant, and I can simply handle these conditions inside
> __kvm_tlb_flush_range_vmid_ipa() itself, but I'll leave
> kvm_arch_flush_remote_tlbs_range()'s implementation as is. Thoughts?

My largest concern is that the series tests for FEAT_TLBIRANGE all over
the shop; I just happened to raise it on this patch. AFAICT, the
iterative approach to invalidating a range of IPAs is effectively dead
code, as every path into __kvm_tlb_flush_range_vmid_ipa() is already
gated by system_supports_tlb_range() somewhere.

Personally, I prefer having the higher-level software models make
aggressive use of range-based interfaces and letting the actual
implementation under the hood select the appropriate instruction. That
helps readability, as it directly communicates the expected outcome of
the invalidation.
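
To put that in code (again untested and only a sketch: apart from
system_supports_tlb_range() and __kvm_tlb_flush_vmid(), everything
here -- the signature, the MAX_TLBI_OPS threshold borrowed from the
stage-1 code, and the stage2_tlbi_range() helper -- is purely
illustrative), the selection would live in exactly one place:

void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu,
				    phys_addr_t start, phys_addr_t end)
{
	unsigned long pages = (end - start) >> PAGE_SHIFT;

	if (system_supports_tlb_range() && pages <= MAX_TLBI_OPS) {
		/* Hypothetical helper issuing TLBI range instructions. */
		stage2_tlbi_range(mmu, start, end);
		return;
	}

	/*
	 * No FEAT_TLBIRANGE (or the range is too large to be worth
	 * iterating): over-invalidate by flushing the whole VMID.
	 * A bounded per-IPA TLBI loop could equally live here.
	 */
	__kvm_tlb_flush_vmid(mmu);
}

Callers then only ever describe the range they care about and never
need to know which instruction actually got used.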

So, if you want to make use of the iterative approach to TLB invalidations on
!TLBIRANGE systems, then this function should _not_ return EOPNOTSUPP.

--
Thanks,
Oliver