Re: [PATCH v8 10/15] x86/split_lock: Handle #AC exception for split lock
From: Ingo Molnar
Date: Thu Apr 25 2019 - 02:07:24 EST
* Fenghua Yu <fenghua.yu@xxxxxxxxx> wrote:
> There may be different considerations on how to handle #AC for split lock,
> e.g. how to handle a system hang caused by a split lock issue in firmware,
> how to emulate the faulting instruction, etc. We use a simple method to
> handle user and kernel split locks and may extend the method in the future.
>
> When a #AC exception for split lock is triggered from a user process, the
> process is killed by SIGBUS. To execute the process properly, a user
> application developer needs to fix the split lock issue.
>
> When a #AC exception for split lock is triggered from a kernel instruction,
> split lock detection is disabled on the local CPU and a warning is issued
> for the split lock issue. After the exception, the faulting instruction is
> executed and kernel execution continues. Split lock detection is only
> disabled on the local CPU, not globally. It is re-enabled if the CPU goes
> offline and comes back online, or through the sysfs interface.
>
> A kernel/driver developer should check the warning, which contains the
> faulting address, context, and call stack info, and fix the split lock
> issues. Then further split lock issues may be captured and fixed.
>
> After bit 29 in MSR_TEST_CTL is set to 1 in the kernel, firmware inherits
> the setting when firmware is executed in S4, S5, run time services, SMI,
> etc. If there is a split lock operation in firmware, it will trigger
> #AC and may hang the system depending on how firmware handles the #AC.
> It's up to a firmware developer to fix split lock issues in firmware.
>
> The MSR TEST_CTL value is cached in the per-CPU variable
> msr_test_ctl_cache, which will be used in virtualization to avoid a costly
> MSR read.
>
> Signed-off-by: Fenghua Yu <fenghua.yu@xxxxxxxxx>
> ---
> arch/x86/include/asm/cpu.h | 3 +++
> arch/x86/kernel/cpu/intel.c | 24 ++++++++++++++++++++++++
> arch/x86/kernel/traps.c | 31 ++++++++++++++++++++++++++++++-
> 3 files changed, 57 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
> index 4e03f53fc079..5706461eb60f 100644
> --- a/arch/x86/include/asm/cpu.h
> +++ b/arch/x86/include/asm/cpu.h
> @@ -42,7 +42,10 @@ unsigned int x86_model(unsigned int sig);
> unsigned int x86_stepping(unsigned int sig);
> #ifdef CONFIG_CPU_SUP_INTEL
> void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
> +DECLARE_PER_CPU(u64, msr_test_ctl_cache);
> +void handle_split_lock_kernel_mode(void);
> #else
> static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
> +static inline void handle_split_lock_kernel_mode(void) {}
> #endif
> #endif /* _ASM_X86_CPU_H */
> diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
> index d7e676c2aebf..2cc69217ca7c 100644
> --- a/arch/x86/kernel/cpu/intel.c
> +++ b/arch/x86/kernel/cpu/intel.c
> @@ -31,6 +31,9 @@
> #include <asm/apic.h>
> #endif
>
> +DEFINE_PER_CPU(u64, msr_test_ctl_cache);
> +EXPORT_PER_CPU_SYMBOL_GPL(msr_test_ctl_cache);
> +
> /*
> * Just in case our CPU detection goes bad, or you have a weird system,
> * allow a way to override the automatic disabling of MPX.
> @@ -654,6 +657,17 @@ static void init_intel_misc_features(struct cpuinfo_x86 *c)
> wrmsrl(MSR_MISC_FEATURES_ENABLES, msr);
> }
>
> +static void init_split_lock_detect(struct cpuinfo_x86 *c)
> +{
> + if (cpu_has(c, X86_FEATURE_SPLIT_LOCK_DETECT)) {
> + u64 test_ctl_val;
> +
> + /* Cache MSR TEST_CTL */
> + rdmsrl(MSR_TEST_CTL, test_ctl_val);
> + this_cpu_write(msr_test_ctl_cache, test_ctl_val);
> + }
> +}
> +
> static void init_intel(struct cpuinfo_x86 *c)
> {
> early_init_intel(c);
> @@ -766,6 +780,8 @@ static void init_intel(struct cpuinfo_x86 *c)
> init_intel_energy_perf(c);
>
> init_intel_misc_features(c);
> +
> + init_split_lock_detect(c);
> }
>
> #ifdef CONFIG_X86_32
> @@ -1060,3 +1076,11 @@ void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c)
> if (ia32_core_cap & CORE_CAP_SPLIT_LOCK_DETECT)
> set_split_lock_detect();
> }
> +
> +void handle_split_lock_kernel_mode(void)
> +{
> + /* Warn and disable split lock detection on this CPU */
> + msr_clear_bit(MSR_TEST_CTL, TEST_CTL_SPLIT_LOCK_DETECT_SHIFT);
> + this_cpu_and(msr_test_ctl_cache, ~TEST_CTL_SPLIT_LOCK_DETECT);
> + WARN_ONCE(1, "split lock operation detected\n");
Please name this more descriptively, such as x86_split_lock_disable() or
so.
Also, please reorganize the split lock detection namespace to be less
idiosyncratic: use a common x86_split_lock_ prefix and organize the new
namespace around that:
x86_split_lock_init() // was: init_split_lock_detect
x86_split_lock_enable() // was: set_split_lock_detect
x86_split_lock_disable() // was: handle_split_lock_kernel_mode
etc.
> +dotraplinkage void do_alignment_check(struct pt_regs *regs, long error_code)
> +{
> + unsigned int trapnr = X86_TRAP_AC;
> + char str[] = "alignment check";
> + int signr = SIGBUS;
> +
> + RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
> +
> + if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) ==
> + NOTIFY_STOP)
Please do not break lines mid-statement when it does absolutely nothing to
improve readability. Just ignore checkpatch.
> + cond_local_irq_enable(regs);
> + if (!user_mode(regs) &&
> + static_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
Ditto - in fact this doesn't even violate the col80 rule ...
> + /*
> + * Only split lock can generate #AC from kernel at this point.
> + * Warn and disable split lock detection on this CPU. The
> + * faulting instruction will be executed without generating
> + * another #AC fault.
> + */
How about explaining all this more clearly:
> + /*
> + * Only split locks can generate #AC from kernel mode.
> + *
> + * The split-lock detection feature is a one-shot
> + * debugging facility, so we disable it immediately and
> + * print a warning.
> + *
> + * This also solves the instruction restart problem: when we
> + * return to the faulting instruction right after this, it
> + * will be executed without generating another #AC fault
> + * and getting into an infinite loop; instead it will
> + * continue without side effects to the interrupted
> + * execution context.
> + *
> + * Split-lock detection will remain disabled permanently
> + * after this, until the next reboot.
> + */
?
Also, AFAICS this code will disable split-lock detection only on the
current CPU - all the other 4,096 CPUs hitting this same lock at the
exact same time will happily continue spamming the kernel log as they
encounter the same split lock, correct?
While the warning itself uses WARN_ONCE(), neither that nor the underlying
BUGFLAG_ONCE mechanism is an atomic facility.
Instead, please add an explicit, global split_lock_debug bit that the
first CPU hitting it disables, and only that CPU is allowed to print a
single warning. All other CPUs just disable split-lock debugging silently
and continue.
This also solves the race where the split-lock #AC fault is re-triggered by
an NMI or perf context interrupting the split-lock warning while the
original WARN_ON() is still executing.
Thanks,
Ingo