Re: [patch V2 24/28] x86/speculation: Prepare arch_smt_update() for PRCTL mode
From: Lendacky, Thomas
Date: Tue Nov 27 2018 - 15:18:53 EST
On 11/25/2018 12:33 PM, Thomas Gleixner wrote:
> The upcoming fine grained per task STIBP control needs to be updated on CPU
> hotplug as well.
>
> Split out the code which controls the strict mode so the prctl control code
> can be added later. Mark the SMP function call argument __unused while at it.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>
> ---
>
> v1 -> v2: s/app2app/user/. Mark smp function argument __unused
>
> ---
> arch/x86/kernel/cpu/bugs.c | 46 ++++++++++++++++++++++++---------------------
> 1 file changed, 25 insertions(+), 21 deletions(-)
>
> --- a/arch/x86/kernel/cpu/bugs.c
> +++ b/arch/x86/kernel/cpu/bugs.c
> @@ -530,40 +530,44 @@ static void __init spectre_v2_select_mit
>          arch_smt_update();
>  }
> 
> -static bool stibp_needed(void)
> +static void update_stibp_msr(void * __unused)
>  {
> -        /* Enhanced IBRS makes using STIBP unnecessary. */
> -        if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
> -                return false;
> -
> -        /* Check for strict user mitigation mode */
> -        return spectre_v2_user == SPECTRE_V2_USER_STRICT;
> +        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
>  }
> 
> -static void update_stibp_msr(void *info)
> +/* Update x86_spec_ctrl_base in case SMT state changed. */
> +static void update_stibp_strict(void)
>  {
> -        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
> +        u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> +
> +        if (sched_smt_active())
> +                mask |= SPEC_CTRL_STIBP;
> +
> +        if (mask == x86_spec_ctrl_base)
> +                return;
> +
> +        pr_info("Spectre v2 user space SMT mitigation: STIBP %s\n",
> +                mask & SPEC_CTRL_STIBP ? "always-on" : "off");
> +        x86_spec_ctrl_base = mask;
> +        on_each_cpu(update_stibp_msr, NULL, 1);

Doing some more testing with spectre_v2_user=on, I've found that during
boot, once the first SMT thread is encountered, no more STIBP MSR updates
are done for any CPUs brought up after that. The first SMT thread causes
mask != x86_spec_ctrl_base, but x86_spec_ctrl_base is then set to mask,
so the check always returns early for every CPU brought up after that
point.
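
To make the sequence concrete, here is a small standalone model of the
logic above (my own simplification, assuming arch_smt_update() runs once
per CPU online event and that CPU1 is the first SMT sibling). It prints
exactly one MSR write broadcast, for the online event that first activates
SMT, and nothing for the CPUs that come up afterwards:

/* Userspace model of update_stibp_strict(); not kernel code. */
#include <stdbool.h>
#include <stdio.h>

#define SPEC_CTRL_STIBP (1ULL << 1)

static unsigned long long spec_ctrl_base;       /* models x86_spec_ctrl_base */
static int online_cpus;

static bool smt_active(void)
{
        return online_cpus >= 2;        /* crude stand-in for sched_smt_active() */
}

static void smt_update_model(void)
{
        unsigned long long mask = spec_ctrl_base & ~SPEC_CTRL_STIBP;

        if (smt_active())
                mask |= SPEC_CTRL_STIBP;

        if (mask == spec_ctrl_base)
                return;                 /* CPUs onlined later always hit this */

        spec_ctrl_base = mask;
        printf("STIBP MSR write broadcast to %d online CPU(s)\n", online_cpus);
}

int main(void)
{
        for (int cpu = 0; cpu < 4; cpu++) {
                online_cpus++;
                smt_update_model();     /* assumed: called on every online event */
        }
        return 0;
}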
Our HW folks recommend that STIBP be set on all threads, so I'm not sure
what the right approach here would be.
Also, I've seen some BIOSes enumerate the cores/threads such that a core
and its sibling thread come before the next core and its sibling thread,
and so on. In that case, I think this would result in only the first core
and its thread having STIBP set, right?
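
Just to illustrate the direction I mean by "set on all threads" (a sketch
of one possibility only, not something from this patch set): each AP could
sync its SPEC_CTRL MSR from the current x86_spec_ctrl_base during its own
bringup, which would cover late-onlined CPUs independent of the BIOS
enumeration order. Assuming it could be wired into the secondary CPU
bringup path; the helper name below is made up for illustration:

/*
 * Hypothetical helper, named here only for illustration: run on each AP
 * during bringup so that a CPU onlined after STIBP has been folded into
 * x86_spec_ctrl_base still writes the bit into its own MSR, regardless of
 * the BIOS core/thread enumeration order.
 */
static void spec_ctrl_sync_ap(void)
{
        if (boot_cpu_has(X86_FEATURE_MSR_SPEC_CTRL))
                wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
}
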
Thanks,
Tom
>  }
> 
>  void arch_smt_update(void)
>  {
> -        u64 mask;
> -
> -        if (!stibp_needed())
> +        /* Enhanced IBRS implies STIBP. No update required. */
> +        if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
>                  return;
> 
>          mutex_lock(&spec_ctrl_mutex);
> 
> -        mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP;
> -        if (sched_smt_active())
> -                mask |= SPEC_CTRL_STIBP;
> -
> -        if (mask != x86_spec_ctrl_base) {
> -                pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n",
> -                        mask & SPEC_CTRL_STIBP ? "Enabling" : "Disabling");
> -                x86_spec_ctrl_base = mask;
> -                on_each_cpu(update_stibp_msr, NULL, 1);
> +        switch (spectre_v2_user) {
> +        case SPECTRE_V2_USER_NONE:
> +                break;
> +        case SPECTRE_V2_USER_STRICT:
> +                update_stibp_strict();
> +                break;
>          }
> +
>          mutex_unlock(&spec_ctrl_mutex);
>  }
>