Re: [PATCH v2 1/1] KVM: x86: fix MSR_IA32_TSC read for nested migration
From: Paolo Bonzini
Date: Thu Sep 24 2020 - 13:33:21 EST
On 21/09/20 12:38, Maxim Levitsky wrote:
> MSR reads/writes should always access the L1 state, since the (nested)
> hypervisor should intercept all the MSRs it wants to adjust, and those
> it doesn't intercept should be read by the guest as if the host had read them.
>
> However, IA32_TSC is an exception. Even when not intercepted, the guest
> still reads the value plus the TSC offset.
> A write, however, does not take any TSC offset into account.
>
> This is documented in Intel's SDM and appears to happen on AMD as well.
>
> This creates a problem when userspace wants to read the IA32_TSC value
> and then write it back (e.g. for migration).
>
> In this case the read returns the L2 value, but the write is interpreted
> as an L1 value. To fix this, make userspace-initiated reads of IA32_TSC
> return the L1 value as well.
>
> Huge thanks to Dave Gilbert for helping me understand these very confusing
> semantics of MSR writes.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> ---
> arch/x86/kvm/x86.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 17f4995e80a7e..ed4314641360e 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3219,9 +3219,21 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> case MSR_IA32_POWER_CTL:
> msr_info->data = vcpu->arch.msr_ia32_power_ctl;
> break;
> - case MSR_IA32_TSC:
> - msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
> + case MSR_IA32_TSC: {
> + /*
> + * Intel SDM states that MSR_IA32_TSC read adds the TSC offset
> + * even when not intercepted. AMD manual doesn't explicitly
> + * state this but appears to behave the same.
> + *
> +	 * However, when userspace wants to read this MSR, we should
> +	 * return its real L1 value so that its restore will be correct.
> + */
> + u64 tsc_offset = msr_info->host_initiated ? vcpu->arch.l1_tsc_offset :
> + vcpu->arch.tsc_offset;
> +
> + msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + tsc_offset;
> break;
> + }
> case MSR_MTRRcap:
> case 0x200 ... 0x2ff:
> return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data);
>
Applied the patch as it is doing the sanest possible thing for the
current semantics of host-initiated accesses.
Paolo
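[Editorial note] The read/write asymmetry the commit message describes can be sketched as follows. This is an illustrative Python model, not KVM code; all names (`Vcpu`, `read_tsc`, `write_tsc`, the offset fields) are hypothetical stand-ins for the kernel's `vcpu->arch.l1_tsc_offset` and `vcpu->arch.tsc_offset`, and TSC scaling is omitted for simplicity.

```python
# Toy model of the IA32_TSC read/write asymmetry fixed by the patch.
# Names are illustrative, not actual KVM symbols; scaling is ignored.

class Vcpu:
    def __init__(self, l1_tsc_offset, l2_tsc_offset):
        self.l1_tsc_offset = l1_tsc_offset
        # While L2 runs, the effective offset combines L1's and L2's.
        self.tsc_offset = l1_tsc_offset + l2_tsc_offset

def read_tsc(vcpu, host_tsc, host_initiated):
    # A guest read adds the current (possibly L2) offset; with the fix,
    # a host-initiated read uses the L1 offset instead.
    offset = vcpu.l1_tsc_offset if host_initiated else vcpu.tsc_offset
    return host_tsc + offset

def write_tsc(vcpu, host_tsc, value):
    # Writes are always interpreted as an L1 value: adjust the L1
    # offset so that a later L1-level read returns `value`.
    vcpu.l1_tsc_offset = value - host_tsc

vcpu = Vcpu(l1_tsc_offset=1000, l2_tsc_offset=500)
# Migration saves the L1 value (host-initiated read, post-fix) ...
saved = read_tsc(vcpu, host_tsc=10_000, host_initiated=True)
# ... and restoring it round-trips, since writes are L1-interpreted.
write_tsc(vcpu, host_tsc=10_000, value=saved)
assert read_tsc(vcpu, host_tsc=10_000, host_initiated=True) == saved
```

Pre-patch, the save step would have used `host_initiated=False` semantics (returning the L2 value, 11500 here), so the restore write would have silently shifted L1's clock.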