[Patch v4 07/18] x86/smt: Convert cpu_smt_control check to cpu_smt_enabled static key

From: Tim Chen
Date: Tue Oct 30 2018 - 15:23:47 EST


Convert the SMT checks in these code paths from testing cpu_smt_control to
using the cpu_smt_enabled static key. The static key is implemented as a
patched jump label, so the runtime load of cpu_smt_control and the
conditional branch are avoided.

Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
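Note: for reference, below is a minimal sketch of the static key mechanism
this change relies on. The cpu_smt_enabled key itself comes from an earlier
patch in this series; the definition and the example function shown here are
illustrative assumptions only, not code added by this patch.

#include <linux/jump_label.h>
#include <linux/printk.h>

/*
 * Illustrative only: the real cpu_smt_enabled key is defined by an
 * earlier patch in this series, not here. SMT is assumed on by default.
 */
DEFINE_STATIC_KEY_TRUE(cpu_smt_enabled);

static void smt_check_example(void)
{
	/*
	 * static_branch_likely() compiles to a jump label that is patched
	 * at runtime (e.g. via static_branch_disable() when SMT is turned
	 * off), so there is no load of a control variable and no
	 * conditional branch in the fast path.
	 */
	if (static_branch_likely(&cpu_smt_enabled))
		pr_info("SMT enabled: apply STIBP / issue L1TF warnings\n");
	else
		pr_info("SMT disabled\n");
}
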
arch/x86/kernel/cpu/bugs.c | 2 +-
arch/x86/kvm/vmx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
index eb07ab6..32d962e 100644
--- a/arch/x86/kernel/cpu/bugs.c
+++ b/arch/x86/kernel/cpu/bugs.c
@@ -355,7 +355,7 @@ void arch_smt_update(void)

 	mutex_lock(&spec_ctrl_mutex);
 	mask = x86_spec_ctrl_base;
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+	if (static_branch_likely(&cpu_smt_enabled))
 		mask |= SPEC_CTRL_STIBP;
 	else
 		mask &= ~SPEC_CTRL_STIBP;
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 533a327..8ec0ea3 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -11072,7 +11072,7 @@ static int vmx_vm_init(struct kvm *kvm)
 			 * Warn upon starting the first VM in a potentially
 			 * insecure environment.
 			 */
-			if (cpu_smt_control == CPU_SMT_ENABLED)
+			if (static_branch_likely(&cpu_smt_enabled))
 				pr_warn_once(L1TF_MSG_SMT);
 			if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
 				pr_warn_once(L1TF_MSG_L1D);
--
2.9.4