Re: [Patch v4 17/18] x86/speculation: Update SPEC_CTRL MSRs of remote CPUs
From: Thomas Gleixner
Date: Sun Nov 04 2018 - 14:49:48 EST
Tim,
On Tue, 30 Oct 2018, Tim Chen wrote:
> void arch_set_security(struct task_struct *tsk, unsigned int value)
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index 943e90d..048b7f4b 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -426,7 +426,19 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
> static __always_inline void __speculation_ctrl_update(unsigned long tifp,
> 						       unsigned long tifn)
> {
> -	bool updmsr = !!((tifp ^ tifn) & _TIF_STIBP);
> +	/*
> +	 * If the TIF_UPDATE_SPEC_CTRL bit is set in tifp, the speculation
> +	 * related TIF flags were changed while the previous task was
> +	 * running, but the SPEC_CTRL MSR has not been synchronized with
> +	 * those changes, so the MSR value can be out of date.
> +	 *
> +	 * Force an update of the SPEC_CTRL MSR when the
> +	 * TIF_UPDATE_SPEC_CTRL bit in tifp is set.
> +	 *
> +	 * The TIF_UPDATE_SPEC_CTRL bit in tifn was cleared before this
> +	 * function was called.
> +	 */
> +	bool updmsr = !!((tifp ^ tifn) & (_TIF_STIBP|_TIF_UPDATE_SPEC_CTRL));
>
> 	/* If TIF_SSBD is different, select the proper mitigation method */
> 	if ((tifp ^ tifn) & _TIF_SSBD) {
> @@ -482,6 +494,14 @@ void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p,
> 	if ((tifp ^ tifn) & _TIF_NOCPUID)
> 		set_cpuid_faulting(!!(tifn & _TIF_NOCPUID));
>
> +	if (tifp & _TIF_UPDATE_SPEC_CTRL)
> +		clear_tsk_thread_flag(prev_p, TIF_UPDATE_SPEC_CTRL);
> +
> +	if (tifn & _TIF_UPDATE_SPEC_CTRL) {
> +		clear_tsk_thread_flag(next_p, TIF_UPDATE_SPEC_CTRL);
> +		tifn &= ~_TIF_UPDATE_SPEC_CTRL;
> +	}
I'm really unhappy about adding yet more conditionals into this code
path. We really need to find some better solution for that.
There are basically two options:
1) Restrict the PRCTL control so it is only possible to modify it at the
point where the application is still single threaded.
2) Add _TIF_UPDATE_SPEC_CTRL to the SYSCALL_EXIT_WORK_FLAGS and handle it
in the slow work path.
The KVM side can be handled in x86_virt_spec_ctrl().
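Again completely untested and only meant to show where the deferred update
would be consumed on the VMENTER/VMEXIT path; the flag handling and the
helper name are the same illustrative ones as above:

void x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl,
			bool setguest)
{
	/*
	 * Pick up a pending update here so the host/guest MSR values
	 * are computed from up to date TIF bits.
	 */
	if (test_and_clear_thread_flag(TIF_UPDATE_SPEC_CTRL))
		speculation_ctrl_update_current();

	/* Existing guest/host SPEC_CTRL / VIRT_SPEC_CTRL handling ... */
}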
Thanks,
tglx