On Fri, Jul 12, 2019 at 04:29:06PM +0800, Tao Xu wrote:
UMWAIT and TPAUSE instructions use IA32_UMWAIT_CONTROL at MSR index E1H
to determine the maximum time in TSC-quanta that the processor can reside
in either C0.1 or C0.2.
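
For reference, the layout of that control MSR as the SDM describes it; the
macro names in this sketch are illustrative, not necessarily what the kernel
headers use:

/* IA32_UMWAIT_CONTROL (MSR 0xe1) layout per the SDM -- illustrative names */
#define MSR_IA32_UMWAIT_CONTROL		0xe1
#define UMWAIT_C02_DISABLE		BIT(0)		/* 1: C0.2 not allowed */
#define UMWAIT_CTRL_RESERVED_BIT	BIT(1)		/* must be zero */
#define UMWAIT_CTRL_MAX_TIME		GENMASK(31, 2)	/* TSC-quanta limit */
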
This patch emulates MSR IA32_UMWAIT_CONTROL in the guest and differentiates
IA32_UMWAIT_CONTROL between host and guest. The variable
umwait_control_cached in arch/x86/kernel/cpu/umwait.c caches the MSR value,
so this patch uses it to avoid frequent rdmsr of IA32_UMWAIT_CONTROL.
Co-developed-by: Jingqi Liu <jingqi.liu@xxxxxxxxx>
Signed-off-by: Jingqi Liu <jingqi.liu@xxxxxxxxx>
Signed-off-by: Tao Xu <tao3.xu@xxxxxxxxx>
---
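
The guest-visible rdmsr/wrmsr side that the changelog describes is not quoted
here; as a rough sketch of the wrmsr path, assuming the usual vmx_set_msr()
switch and that only bit 1 of the MSR is reserved (illustrative, not the
actual hunk):

	/* Sketch only -- not quoted from the patch.  In vmx_set_msr(): */
	case MSR_IA32_UMWAIT_CONTROL:
		if (!msr_info->host_initiated && !vmx_has_waitpkg(vmx))
			return 1;
		/* bit 1 is reserved and the MSR is 32 bits wide */
		if (data & (BIT_ULL(1) | GENMASK_ULL(63, 32)))
			return 1;
		vmx->msr_ia32_umwait_control = data;
		break;

The rdmsr side would simply return vmx->msr_ia32_umwait_control under the
same vmx_has_waitpkg() gate.
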
+static void atomic_switch_umwait_control_msr(struct vcpu_vmx *vmx)
+{
+	if (!vmx_has_waitpkg(vmx))
+		return;
+
+	if (vmx->msr_ia32_umwait_control != umwait_control_cached)
+		add_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONTROL,
+			vmx->msr_ia32_umwait_control,
+			umwait_control_cached, false);
How exactly do we ensure NR_AUTOLOAD_MSRS (8) is still large enough?
I see 3 existing add_atomic_switch_msr() calls, but the one in
atomic_switch_perf_msrs() is in a loop. Are we absolutely sure
that perf_guest_get_msrs() will never return more than 5 MSRs?
(The overflow behavior is sketched right after the quoted function below.)
+	else
+		clear_atomic_switch_msr(vmx, MSR_IA32_UMWAIT_CONTROL);
+}
+
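
To make the headroom question above concrete: when the autoload list is
already full, add_atomic_switch_msr() (paraphrased from memory below, not an
exact quote) only warns once and returns, so the guest's UMWAIT_CONTROL value
would silently never be switched:

	/* Paraphrase of the overflow path in add_atomic_switch_msr() */
	if (i == NR_AUTOLOAD_MSRS) {
		printk_once(KERN_WARNING
			    "Not enough MSR switch entries; can't add MSR %x\n", msr);
		return;
	}
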
static void vmx_arm_hv_timer(struct vcpu_vmx *vmx, u32 val)
{
vmcs_write32(VMX_PREEMPTION_TIMER_VALUE, val);