Re: [PATCH v4 2/2] x86/speculation: Enable cross-hyperthread spectre v2 STIBP mitigation

From: Thomas Gleixner
Date: Sat Sep 08 2018 - 02:45:43 EST


On Fri, 7 Sep 2018, Jiri Kosina wrote:

> On Fri, 7 Sep 2018, Thomas Gleixner wrote:
>
> > > + * The read-modify-write of the MSR doesn't need any race protection here,
> > > + * as we're running in atomic context.
> > > + */
> > > +static void enable_stibp(void *info)
> > > +{
> > > + u64 mask;
> > > + rdmsrl(MSR_IA32_SPEC_CTRL, mask);
> > > + mask |= SPEC_CTRL_STIBP;
> > > + wrmsrl(MSR_IA32_SPEC_CTRL, mask);
> > > +}
> >
> > You need to write x86_spec_ctrl_base
>
> Agreed (Josh pointed that out as well) -- otherwise this gets lost during
> VMEXIT at least.

It's not only virt. The firmware crap as well.

> > so again. x86_spec_ctrl_base needs the STIPB bit update when the
> > enable/disable changes.
>
> Agreed, but that's not sufficient. We need to update the MSR value on
> primary threads as well (which we do in sysfs store path), so this is
> merely an optimization so that we don't do it pointlessly twice on
> siblings.

No. It's a correctness issue. If a normal hotplug operation happens after
SMT has been enabled, then you need to update the MSR as well.

> > If you do that _before_ the loop then you spare 50% IPIs again because the
> > siblings will do the right thing via x86_spec_ctrl_base.
>
> So I will go through the whole codepath again, but I fear your suggestion
> would not work -- see the check for cpu_smt_control in stibp_needed(). We
> need to see the old (or new, depending on the direction of the transition)
> value of cpu_smt_control, which will break if we move
> arch_smt_enable_errata() (and thus the check).

That's bogus. The arch_smt_control(enable) function does not need to look
at the SMT control state at all. The caller hands the not yet populated new
state in and that's enough to make the decision whether to set or clear the
bit in x86_spec_ctrl_base.
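
Roughly, the shape being suggested is something like the sketch below. This
is not the actual kernel code from the patch; the function name
arch_smt_update() and the fake_msr variable are illustrative stand-ins
(the real code writes MSR_IA32_SPEC_CTRL via wrmsrl() on each online CPU).
The point is that the caller passes the new SMT state in, so the function
never has to inspect cpu_smt_control, and the STIBP bit lives in
x86_spec_ctrl_base so VMEXIT, firmware calls and late hotplug all restore
the right value:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit position of STIBP in IA32_SPEC_CTRL (IBRS is bit 0, STIBP bit 1). */
#define SPEC_CTRL_STIBP (1ULL << 1)

/* Base value restored into the MSR after VMEXIT and firmware calls,
 * and used to initialize newly onlined CPUs. */
static uint64_t x86_spec_ctrl_base;

/* Illustrative stand-in for MSR_IA32_SPEC_CTRL on one CPU. */
static uint64_t fake_msr;

/* Caller hands in the *new* SMT state; no read of cpu_smt_control needed. */
static void arch_smt_update(bool smt_enabled)
{
	if (smt_enabled)
		x86_spec_ctrl_base |= SPEC_CTRL_STIBP;
	else
		x86_spec_ctrl_base &= ~SPEC_CTRL_STIBP;

	/* In the kernel this would be a wrmsrl() on every online CPU;
	 * siblings coming online later pick the bit up from the base
	 * value, which is why no extra per-sibling IPI is needed. */
	fake_msr = x86_spec_ctrl_base;
}
```

Because the decision depends only on the state handed in, the call site can
sit before or after the SMT control state is actually updated.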

Thanks,

tglx