Re: [PATCH v3 2/2] arm64: kvm: Introduce MTE VCPU feature

From: Andrew Jones
Date: Fri Oct 02 2020 - 12:20:21 EST


On Fri, Oct 02, 2020 at 04:30:47PM +0100, Steven Price wrote:
> On 02/10/2020 15:30, Andrew Jones wrote:
> > On Fri, Sep 25, 2020 at 10:36:07AM +0100, Steven Price wrote:
> > > + if (system_supports_mte() && kvm->arch.mte_enabled && pfn_valid(pfn)) {
> >
> > 'system_supports_mte() && kvm->arch.mte_enabled' is redundant, but I
> > assume system_supports_mte() is there to improve the efficiency of the
> > branch, as it's using cpus_have_const_cap().
>
> system_supports_mte() compiles to 0 when MTE support isn't built in, so this
> code can be removed by the compiler,

I know. That's what I meant by "improve the efficiency of the branch".


> whereas with kvm->arch.mte_enabled I
> doubt the compiler can deduce that it is never set.
>
> > Maybe a helper like
> >
> > static inline bool kvm_arm_mte_enabled(struct kvm *kvm)
> > {
> > return system_supports_mte() && kvm->arch.mte_enabled;
> > }
> >
> > would allow both the more efficient branch and look less confusing
> > where it gets used.
>
> I wasn't sure it was worth having a helper since this was the only place
> checking this condition. It's also a bit tricky putting this in a logical
> header file: kvm_host.h doesn't work because struct kvm hasn't been defined
> by then.

OK, but I feel like we're setting ourselves up to revisit these types of
conditions again when our memories fade or when new developers see them
for the first time and ask.

Thanks,
drew

>
> Steve
>
> > > + /*
> > > + * VM will be able to see the page's tags, so we must ensure
> > > + * they have been initialised.
> > > + */
> > > + struct page *page = pfn_to_page(pfn);
> > > + long i, nr_pages = compound_nr(page);
> > > +
> > > + /* if PG_mte_tagged is set, tags have already been initialised */
> > > + for (i = 0; i < nr_pages; i++, page++) {
> > > + if (!test_and_set_bit(PG_mte_tagged, &page->flags))
> > > + mte_clear_page_tags(page_address(page));
> > > + }
> > > + }
> > > +
> > > if (writable)
> > > kvm_set_pfn_dirty(pfn);
> > > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > > index a655f172b5ad..5010a47152b4 100644
> > > --- a/arch/arm64/kvm/sys_regs.c
> > > +++ b/arch/arm64/kvm/sys_regs.c
> > > @@ -1132,7 +1132,8 @@ static u64 read_id_reg(const struct kvm_vcpu *vcpu,
> > > val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> > > val &= ~(0xfUL << ID_AA64PFR0_AMU_SHIFT);
> > > } else if (id == SYS_ID_AA64PFR1_EL1) {
> > > - val &= ~(0xfUL << ID_AA64PFR1_MTE_SHIFT);
> > > + if (!vcpu->kvm->arch.mte_enabled)
> > > + val &= ~(0xfUL << ID_AA64PFR1_MTE_SHIFT);
> > > } else if (id == SYS_ID_AA64ISAR1_EL1 && !vcpu_has_ptrauth(vcpu)) {
> > > val &= ~((0xfUL << ID_AA64ISAR1_APA_SHIFT) |
> > > (0xfUL << ID_AA64ISAR1_API_SHIFT) |
> > > @@ -1394,6 +1395,9 @@ static bool access_mte_regs(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
> > > static unsigned int mte_visibility(const struct kvm_vcpu *vcpu,
> > > const struct sys_reg_desc *rd)
> > > {
> > > + if (vcpu->kvm->arch.mte_enabled)
> > > + return 0;
> > > +
> > > return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
> > > }
> > > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > > index f6d86033c4fa..87678ed82ab4 100644
> > > --- a/include/uapi/linux/kvm.h
> > > +++ b/include/uapi/linux/kvm.h
> > > @@ -1035,6 +1035,7 @@ struct kvm_ppc_resize_hpt {
> > > #define KVM_CAP_LAST_CPU 184
> > > #define KVM_CAP_SMALLER_MAXPHYADDR 185
> > > #define KVM_CAP_S390_DIAG318 186
> > > +#define KVM_CAP_ARM_MTE 188
> > > #ifdef KVM_CAP_IRQ_ROUTING
> > > --
> > > 2.20.1
> > >
> > >
> >
> > Besides the helper suggestion nit
> >
> > Reviewed-by: Andrew Jones <drjones@xxxxxxxxxx>
> >
>
>