[RFC PATCH 05/16] x86/split_lock: Use non-atomic set and clear bit instructions in clear_cpufeature()
From: Fenghua Yu
Date: Sun May 27 2018 - 11:49:52 EST
x86_capability can span two cache lines, depending on the kernel configuration
and build environment. When the #AC exception is enabled for split locked
accesses, clear_cpufeature() may generate an #AC exception because it
atomically sets or clears bits in x86_capability.
However, the kernel clears CPU feature bits only while a CPU is booting up.
Therefore, there is no race condition when clear_cpufeature() is called and
no need to atomically set or clear bits in x86_capability.
To avoid an #AC exception caused by a split lock, call the non-atomic
__set_bit() and __clear_bit(). They are also faster than the atomic
set_bit() and clear_bit().
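For reference, the difference between the two flavors can be sketched in
user space roughly as below. This is only a simplified analogue with
hypothetical names (my_set_bit()/my__set_bit()); the real set_bit() and
__set_bit() live in arch/x86/include/asm/bitops.h and use inline assembly.
The point is that the atomic flavor is a lock-prefixed read-modify-write on
an unsigned long, which becomes a split locked access if that long straddles
a cache line, while the non-atomic flavor is a plain load/modify/store that
never takes a bus lock and is also cheaper:

#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Atomic flavor: lock-prefixed RMW. If the addressed unsigned long
 * straddles a cache line, this is a split locked access and triggers
 * #AC when split lock detection is enabled.
 */
static inline void my_set_bit(unsigned int nr, unsigned long *addr)
{
	__atomic_fetch_or(&addr[nr / BITS_PER_LONG],
			  1UL << (nr % BITS_PER_LONG), __ATOMIC_SEQ_CST);
}

/* Non-atomic flavor: plain read-modify-write, no lock prefix, so no
 * split lock risk. Safe only when there are no concurrent writers,
 * which is the case while a CPU is booting up.
 */
static inline void my__set_bit(unsigned int nr, unsigned long *addr)
{
	addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

int main(void)
{
	unsigned long caps[4] = { 0 };

	my_set_bit(3, caps);	/* atomic path, as set_bit() would do */
	my__set_bit(65, caps);	/* non-atomic path, as __set_bit() would do */
	printf("caps[0]=%#lx caps[1]=%#lx\n", caps[0], caps[1]);
	return 0;
}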
Signed-off-by: Fenghua Yu <fenghua.yu@xxxxxxxxx>
---
arch/x86/kernel/cpu/cpuid-deps.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
index 2c0bd38a44ab..b2c2a004c769 100644
--- a/arch/x86/kernel/cpu/cpuid-deps.c
+++ b/arch/x86/kernel/cpu/cpuid-deps.c
@@ -65,15 +65,15 @@ static const struct cpuid_dep cpuid_deps[] = {
 static inline void clear_feature(struct cpuinfo_x86 *c, unsigned int feature)
 {
 	/*
-	 * Note: This could use the non atomic __*_bit() variants, but the
-	 * rest of the cpufeature code uses atomics as well, so keep it for
-	 * consistency. Cleanup all of it separately.
+	 * This code is only called at boot time and does not need to be
+	 * atomic, so use the non-atomic __*_bit() variants for better
+	 * performance and to avoid an #AC exception on split locked access.
 	 */
 	if (!c) {
 		clear_cpu_cap(&boot_cpu_data, feature);
-		set_bit(feature, (unsigned long *)cpu_caps_cleared);
+		__set_bit(feature, (unsigned long *)cpu_caps_cleared);
 	} else {
-		clear_bit(feature, (unsigned long *)c->x86_capability);
+		__clear_bit(feature, (unsigned long *)c->x86_capability);
 	}
 }
--
2.5.0