Re: [PATCH v7 7/10] KVM: arm/arm64: context-switch ptrauth registers

From: Julien Thierry
Date: Thu Mar 21 2019 - 04:30:07 EST




On 21/03/2019 06:08, Amit Daniel Kachhap wrote:
> Hi Julien,
>
> On 3/20/19 5:43 PM, Julien Thierry wrote:
>> Hi Amit,
>>
>> On 19/03/2019 08:30, Amit Daniel Kachhap wrote:
>>> From: Mark Rutland <mark.rutland@xxxxxxx>
>>>
>>> When pointer authentication is supported, a guest may wish to use it.
>>> This patch adds the necessary KVM infrastructure for this to work, with
>>> a semi-lazy context switch of the pointer auth state.
>>>
>>> The pointer authentication feature is enabled only when VHE is built
>>> into the kernel and the feature is present in the CPU implementation,
>>> so only VHE code paths are modified.
>>>
>>> When we schedule a vcpu, we disable guest usage of pointer
>>> authentication instructions and accesses to the keys. While these are
>>> disabled, we avoid context-switching the keys. When we trap the guest
>>> trying to use pointer authentication functionality, we change to eagerly
>>> context-switching the keys, and enable the feature. The next time the
>>> vcpu is scheduled out/in, we start again. However, the host key save
>>> is optimized: it is performed from within the ptrauth
>>> instruction/register access trap.
>>>
>>> Pointer authentication consists of address authentication and generic
>>> authentication, and CPUs in a system might have varied support for
>>> either. Where support for either feature is not uniform, it is hidden
>>> from guests via ID register emulation, as a result of the cpufeature
>>> framework in the host.
>>>
>>> Unfortunately, address authentication and generic authentication cannot
>>> be trapped separately, as the architecture provides a single EL2 trap
>>> covering both. If we wish to expose one without the other, we cannot
>>> prevent a (badly-written) guest from intermittently using a feature
>>> which is not uniformly supported (when scheduled on a physical CPU which
>>> supports the relevant feature). Hence, this patch expects both types
>>> of authentication to be present in a CPU.
>>>
>>> This key switch is done from the guest enter/exit assembly in
>>> preparation for the upcoming in-kernel pointer authentication support.
>>> Hence, these key switching routines are not implemented in C, as doing
>>> so may cause pointer authentication key signing errors in some
>>> situations.
>>>
>>> Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
>>> [Only VHE, key switch in full assembly, vcpu_has_ptrauth checks,
>>> save host key in ptrauth exception trap]
>>> Signed-off-by: Amit Daniel Kachhap <amit.kachhap@xxxxxxx>
>>> Reviewed-by: Julien Thierry <julien.thierry@xxxxxxx>
>>> Cc: Marc Zyngier <marc.zyngier@xxxxxxx>
>>> Cc: Christoffer Dall <christoffer.dall@xxxxxxx>
>>> Cc: kvmarm@xxxxxxxxxxxxxxxxxxxxx
>>> ---
>>>  arch/arm64/include/asm/kvm_host.h        |  17 ++++++
>>>  arch/arm64/include/asm/kvm_ptrauth_asm.h | 100 +++++++++++++++++++++++++++++++
>>>  arch/arm64/kernel/asm-offsets.c          |   6 ++
>>>  arch/arm64/kvm/guest.c                   |  14 +++++
>>>  arch/arm64/kvm/handle_exit.c             |  24 +++++---
>>>  arch/arm64/kvm/hyp/entry.S               |   7 +++
>>>  arch/arm64/kvm/reset.c                   |   7 +++
>>>  arch/arm64/kvm/sys_regs.c                |  46 +++++++++++++-
>>>  virt/kvm/arm/arm.c                       |   2 +
>>>  9 files changed, 212 insertions(+), 11 deletions(-)
>>>  create mode 100644 arch/arm64/include/asm/kvm_ptrauth_asm.h
>>>
> [...]
>>> +#ifdef	CONFIG_ARM64_PTR_AUTH
>>> +
>>> +#define PTRAUTH_REG_OFFSET(x)	(x - CPU_APIAKEYLO_EL1)
>>
>> I don't really see the point of this macro. You move the pointers of
>> kvm_cpu_contexts to point to where the ptr auth registers are (which is
>> in the middle of an array) by adding the offset of APIAKEYLO and then we
>> have to recompute all offsets with this macro.
>>
>> Why not just pass the kvm_cpu_context pointers to
>> ptrauth_save/restore_state and use the already defined offsets
>> (CPU_AP*_EL1) directly?
>>
>> I think this would also allow to use one less register for the
>> ptrauth_switch_to_* macros.
> Actually the values of CPU_AP*_EL1 exceed the ldp/stp immediate range
> (i.e. 512), so this was done to keep the immediate offset within range.
> The alternative would have been to compute the destination address in a
> register, but that would add one more add instruction everywhere.
> I should have mentioned this in a comment somewhere.

Oh, I see. Yes, it would definitely be worth a comment.

Thanks,

--
Julien Thierry