Re: [PATCH RFC v1 08/20] KVM: VMX: Support extended register index in exit handling

From: Chang S. Bae

Date: Thu Nov 13 2025 - 18:22:49 EST


On 11/11/2025 9:45 AM, Paolo Bonzini wrote:
> On 11/10/25 19:01, Chang S. Bae wrote:
>
>> -static inline struct vmx_insn_info vmx_get_insn_info(struct kvm_vcpu *vcpu __maybe_unused)
>> +static inline struct vmx_insn_info vmx_get_insn_info(struct kvm_vcpu *vcpu)
>>   {
>>       struct vmx_insn_info insn;
>> -    insn.extended  = false;
>> -    insn.info.word = vmcs_read32(VMX_INSTRUCTION_INFO);
>> +    if (vmx_egpr_enabled(vcpu)) {
>> +        insn.extended   = true;
>> +        insn.info.dword = vmcs_read64(EXTENDED_INSTRUCTION_INFO);
>> +    } else {
>> +        insn.extended  = false;
>> +        insn.info.word = vmcs_read32(VMX_INSTRUCTION_INFO);
>> +    }
>
> Could this use static_cpu_has(X86_FEATURE_APX) instead, which is more
> efficient (avoids a runtime test)?

Yes, for the same reason mentioned in patch 7.

>> @@ -415,7 +420,10 @@ static __always_inline unsigned long vmx_get_exit_qual(struct kvm_vcpu *vcpu)
>>   static inline int vmx_get_exit_qual_gpr(struct kvm_vcpu *vcpu)
>>   {
>> -    return (vmx_get_exit_qual(vcpu) >> 8) & 0xf;
>> +    if (vmx_egpr_enabled(vcpu))
>> +        return (vmx_get_exit_qual(vcpu) >> 8) & 0x1f;
>> +    else
>> +        return (vmx_get_exit_qual(vcpu) >> 8) & 0xf;
>
> Can this likewise mask against 0x1f, unconditionally?

It looks like the behavior of that previously-undefined bit is not
guaranteed -- there's no architectural promise that the bit will always
read as zero. So in this case, I think it's still safer to rely on the
enumeration.

Perhaps adding a comment like this would clarify the intent:

/*
 * Bit 12 was previously undefined, so its value is not guaranteed to
 * be zero. Only rely on the full 5-bit field when the extension is
 * available.
 */
if (vmx_ext_insn_info_available())
...