Re: [PATCH v2 2/3] KVM: SEV: Add support for IBPB-on-Entry
From: Sean Christopherson
Date: Mon Mar 02 2026 - 10:11:08 EST
On Sat, Feb 28, 2026, Borislav Petkov wrote:
> Sean, ack for the KVM bits and me taking them thru tip?
Ya, should be fine for this to go through tip.
> On Tue, Feb 03, 2026 at 04:24:04PM -0600, Kim Phillips wrote:
> > AMD EPYC 5th generation and above processors support IBPB-on-Entry
> > for SNP guests. By invoking an Indirect Branch Prediction Barrier
> > (IBPB) on VMRUN, old indirect branch predictions are prevented
> > from influencing indirect branches within the guest.
> >
> > SNP guests may choose to enable IBPB-on-Entry by setting
> > SEV_FEATURES bit 21 (IbpbOnEntry).
> >
> > Host support for IBPB on Entry is indicated by CPUID
> > Fn8000_001F[IbpbOnEntry], bit 31.
> >
> > If supported, indicate support for IBPB on Entry in
> > sev_supported_vmsa_features bit 21 (IbpbOnEntry).
> >
> > For more info, refer to page 615, Section 15.36.17 "Side-Channel
> > Protection", AMD64 Architecture Programmer's Manual Volume 2: System
> > Programming Part 2, Pub. 24593 Rev. 3.42 - March 2024 (see Link).
> >
> > Link: https://bugzilla.kernel.org/attachment.cgi?id=306250
> > Signed-off-by: Kim Phillips <kim.phillips@xxxxxxx>
> > Reviewed-by: Tom Lendacky <thomas.lendacky@xxxxxxx>
> > ---
...
> > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > index ea515cf41168..8a6d25db0c00 100644
> > --- a/arch/x86/kvm/svm/sev.c
> > +++ b/arch/x86/kvm/svm/sev.c
> > @@ -3165,8 +3165,15 @@ void __init sev_hardware_setup(void)
> >  	    cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP))
> >  		sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
> >  
> > -	if (sev_snp_enabled && tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
> > +	if (!sev_snp_enabled)
> > +		return;
> > +
> > +	/* the following feature bit checks are SNP specific */
> > +	if (tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
> >  		sev_supported_vmsa_features |= SVM_SEV_FEAT_SECURE_TSC;
> > +
> > +	if (cpu_feature_enabled(X86_FEATURE_IBPB_ON_ENTRY))
> > +		sev_supported_vmsa_features |= SVM_SEV_FEAT_IBPB_ON_ENTRY;
> >  }
I think I'd prefer to nest the if-statement, e.g.

	if (sev_snp_enabled) {
		if (tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
			sev_supported_vmsa_features |= SVM_SEV_FEAT_SECURE_TSC;

		if (cpu_feature_enabled(X86_FEATURE_IBPB_ON_ENTRY))
			sev_supported_vmsa_features |= SVM_SEV_FEAT_IBPB_ON_ENTRY;
	}
I'm mildly concerned that we'll overlook the early return and unintentionally
make common code SNP-only by adding it after the return, at the tail of the
function.
More importantly, this patch is buggy. __sev_guest_init() needs to disallow
setting SVM_SEV_FEAT_IBPB_ON_ENTRY for non-SNP guests.
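I.e. the minimal fix within this patch would be to also clear the new bit for
non-SNP guests, mirroring the existing SECURE_TSC handling, something like:

	/* SNP-only VMSA features must not be requested for non-SNP guests */
	if (!snp_active)
		valid_vmsa_features &= ~(SVM_SEV_FEAT_SECURE_TSC |
					 SVM_SEV_FEAT_IBPB_ON_ENTRY);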
As a follow-up, I also think we should advertise SVM_SEV_FEAT_SNP_ACTIVE and
allow userspace to set the flag in kvm_sev_init.vmsa_features. KVM still needs
to set the flag for backwards compatibility, but disallowing
SVM_SEV_FEAT_SNP_ACTIVE for an SNP guest is bizarre.
E.g. across 2 or 3 patches:
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index edde36097ddc..7db1bfce4cca 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -307,6 +307,10 @@ static_assert((X2AVIC_4K_MAX_PHYSICAL_ID & AVIC_PHYSICAL_MAX_INDEX_MASK) == X2AV
 #define SVM_SEV_FEAT_DEBUG_SWAP			BIT(5)
 #define SVM_SEV_FEAT_SECURE_TSC			BIT(9)
+#define SVM_SEV_FEAT_SNP_ONLY_MASK		(SVM_SEV_FEAT_SNP_ACTIVE |	\
+						 SVM_SEV_FEAT_SECURE_TSC |	\
+						 SVM_SEV_FEAT_IBPB_ON_ENTRY)
+
 #define VMCB_ALLOWED_SEV_FEATURES_VALID		BIT_ULL(63)
struct vmcb_seg {
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 41385573629e..b2fe0fa11f90 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -500,7 +500,7 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp,
 		return -EINVAL;
 
 	if (!snp_active)
-		valid_vmsa_features &= ~SVM_SEV_FEAT_SECURE_TSC;
+		valid_vmsa_features &= ~SVM_SEV_FEAT_SNP_ONLY_MASK;
 
 	if (data->vmsa_features & ~valid_vmsa_features)
 		return -EINVAL;
@@ -3218,8 +3218,15 @@ void __init sev_hardware_setup(void)
 	    cpu_feature_enabled(X86_FEATURE_NO_NESTED_DATA_BP))
 		sev_supported_vmsa_features |= SVM_SEV_FEAT_DEBUG_SWAP;
 
-	if (sev_snp_enabled && tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
-		sev_supported_vmsa_features |= SVM_SEV_FEAT_SECURE_TSC;
+	if (sev_snp_enabled) {
+		sev_supported_vmsa_features |= SVM_SEV_FEAT_SNP_ACTIVE;
+
+		if (tsc_khz && cpu_feature_enabled(X86_FEATURE_SNP_SECURE_TSC))
+			sev_supported_vmsa_features |= SVM_SEV_FEAT_SECURE_TSC;
+
+		if (cpu_feature_enabled(X86_FEATURE_IBPB_ON_ENTRY))
+			sev_supported_vmsa_features |= SVM_SEV_FEAT_IBPB_ON_ENTRY;
+	}
 }
 
 void sev_hardware_unsetup(void)