Re: [PATCH v4 13/20] KVM:VMX: Emulate read and write to CET MSRs

From: Yang, Weijiang
Date: Thu Jul 27 2023 - 03:23:10 EST



On 7/27/2023 1:16 PM, Chao Gao wrote:
+	case MSR_IA32_S_CET:
+	case MSR_KVM_GUEST_SSP:
+	case MSR_IA32_INT_SSP_TAB:
+		if (kvm_get_msr_common(vcpu, msr_info))
+			return 1;
+		if (msr_info->index == MSR_KVM_GUEST_SSP)
+			msr_info->data = vmcs_readl(GUEST_SSP);
+		else if (msr_info->index == MSR_IA32_S_CET)
+			msr_info->data = vmcs_readl(GUEST_S_CET);
+		else if (msr_info->index == MSR_IA32_INT_SSP_TAB)
+			msr_info->data = vmcs_readl(GUEST_INTR_SSP_TABLE);
+		break;
 	case MSR_IA32_DEBUGCTLMSR:
 		msr_info->data = vmcs_read64(GUEST_IA32_DEBUGCTL);
 		break;
@@ -2402,6 +2417,31 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
 		else
 			vmx->pt_desc.guest.addr_a[index / 2] = data;
 		break;
+#define VMX_CET_CONTROL_MASK	(~GENMASK_ULL(9, 6))
bits 9:6 are reserved for both Intel and AMD. Shouldn't this check be
done in the common code?
My thinking is, on AMD platforms, bits 63:2 are reserved anyway since AMD
doesn't support IBT,
You can only say

bits 5:2 and bits 63:10 are reserved since AMD doesn't support IBT.

bits 9:6 are reserved regardless of IBT support.

so the checks in the common code are enough for AMD; when the execution flow
comes here, it should be VMX, and this additional check is needed.
The checks against reserved bits are common to AMD and Intel:

1. if SHSTK is supported, bits 1:0 are not reserved.
2. if IBT is supported, bits 5:2 and bits 63:10 are not reserved.
3. bits 9:6 are always reserved.

There is nothing specific to Intel.

So you want the code to be:

+#define CET_IBT_MASK_BITS	(GENMASK_ULL(5, 2) | GENMASK_ULL(63, 10))
+#define CET_CTRL_RESERVED_BITS	GENMASK(9, 6)
+#define CET_SHSTK_MASK_BITS	GENMASK(1, 0)
+
+	if ((!guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
+	     (data & CET_SHSTK_MASK_BITS)) ||
+	    (!guest_can_use(vcpu, X86_FEATURE_IBT) &&
+	     (data & CET_IBT_MASK_BITS)) ||
+	    (data & CET_CTRL_RESERVED_BITS))
 	    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+		return 1;
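For illustration, if these checks were hoisted into the common code as
suggested, they might end up as a small vendor-neutral helper along these
lines, reusing the three mask macros just defined (a sketch only; the helper
name and placement are assumptions, not what the series merged):

/*
 * Sketch: vendor-neutral validation of a value written to
 * MSR_IA32_{U,S}_CET, e.g. callable from kvm_set_msr_common().
 * Hypothetical helper name.
 */
static bool kvm_cet_ctrl_valid(struct kvm_vcpu *vcpu, u64 data)
{
	/* bits 9:6 are reserved regardless of SHSTK/IBT support */
	if (data & CET_CTRL_RESERVED_BITS)
		return false;

	/* bits 1:0 are defined only when SHSTK is exposed to the guest */
	if (!guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
	    (data & CET_SHSTK_MASK_BITS))
		return false;

	/* bits 5:2 and 63:10 are defined only when IBT is exposed */
	if (!guest_can_use(vcpu, X86_FEATURE_IBT) &&
	    (data & CET_IBT_MASK_BITS))
		return false;

	return true;
}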


+#define CET_LEG_BITMAP_BASE(data)	((data) >> 12)
+#define CET_EXCLUSIVE_BITS		(CET_SUPPRESS | CET_WAIT_ENDBR)
+	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
+		return kvm_set_msr_common(vcpu, msr_info);
this hunk can be dropped as well.
In patch 16, these lines still need to be added back for PL{0,1,2}_SSP, so
I would like to keep it.
If that's the case, better to move it to patch 16, where the change
can be justified. And PL3_SSP should be removed anyway, and then the
"msr_index != MSR_IA32_PL3_SSP" check in the below code snippet in
patch 16 can go away.

Sure, will do it.


+	/*
+	 * Writes to the base SSP MSRs should happen ahead of toggling
+	 * the IA32_S_CET.SH_STK_EN bit.
+	 */
+	if (msr_index != MSR_IA32_PL3_SSP && data) {
+		vmx_disable_write_intercept_sss_msr(vcpu);
+		wrmsrl(msr_index, data);
+	}


here.
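With the PL3_SSP case routed through the common code, the snippet above would
presumably shrink to something like this (a sketch of the agreed-on cleanup,
not the final patch):

	/*
	 * Writes to the base SSP MSRs should happen ahead of toggling
	 * the IA32_S_CET.SH_STK_EN bit.
	 */
	if (data) {
		vmx_disable_write_intercept_sss_msr(vcpu);
		wrmsrl(msr_index, data);
	}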

+		break;
+	case MSR_IA32_U_CET:
+	case MSR_IA32_S_CET:
+	case MSR_KVM_GUEST_SSP:
+	case MSR_IA32_INT_SSP_TAB:
+		if ((msr_index == MSR_IA32_U_CET ||
+		     msr_index == MSR_IA32_S_CET) &&
+		    ((data & ~VMX_CET_CONTROL_MASK) ||
+		     !IS_ALIGNED(CET_LEG_BITMAP_BASE(data), 4) ||
+		     (data & CET_EXCLUSIVE_BITS) == CET_EXCLUSIVE_BITS))
+			return 1;
how about

	case MSR_IA32_U_CET:
	case MSR_IA32_S_CET:
		if ((data & ~VMX_CET_CONTROL_MASK) || ...
			...
	case MSR_KVM_GUEST_SSP:
	case MSR_IA32_INT_SSP_TAB:
Do you mean to use "fallthrough"?
Yes.

OK, will change it, thanks!
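Putting the agreed change together, the restructured cases would presumably
look something like this (a sketch assembled from the checks quoted above;
the exact code in the next revision may differ):

	case MSR_IA32_U_CET:
	case MSR_IA32_S_CET:
		/* validation that applies only to the two control MSRs */
		if ((data & ~VMX_CET_CONTROL_MASK) ||
		    !IS_ALIGNED(CET_LEG_BITMAP_BASE(data), 4) ||
		    (data & CET_EXCLUSIVE_BITS) == CET_EXCLUSIVE_BITS)
			return 1;
		fallthrough;
	case MSR_KVM_GUEST_SSP:
	case MSR_IA32_INT_SSP_TAB:
		/* handling shared by all four MSRs continues here */
		break;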