[tip: x86/cpu] x86/cpu: Align cpu_caps_cleared and cpu_caps_set to unsigned long
From: tip-bot2 for Fenghua Yu
Date: Fri Nov 15 2019 - 14:26:37 EST
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: f6a892ddd53e555362dbf64d31b47fde0f550ec4
Gitweb: https://git.kernel.org/tip/f6a892ddd53e555362dbf64d31b47fde0f550ec4
Author: Fenghua Yu <fenghua.yu@xxxxxxxxx>
AuthorDate: Mon, 16 Sep 2019 15:39:56 -07:00
Committer: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CommitterDate: Fri, 15 Nov 2019 20:20:32 +01:00
x86/cpu: Align cpu_caps_cleared and cpu_caps_set to unsigned long
cpu_caps_cleared[] and cpu_caps_set[] are arrays of type u32 and therefore
naturally aligned to 4 bytes, which satisfies unsigned long alignment on
32-bit, but not on 64-bit.
The array pointer is handed into atomic bit operations. If the access is not
aligned to unsigned long, the atomic bit operations can end up crossing a
cache line boundary, which causes the CPU to do a full bus lock as it can't
lock both cache lines at once. The bus lock operation is heavyweight and can
cause severe performance degradation.
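As a rough illustration (not part of the patch, and assuming a 64-byte cache
line plus the made-up address below), a user-space C sketch of how an 8-byte
access at a merely 4-byte-aligned address can straddle two cache lines:

#include <stdio.h>
#include <stdint.h>

#define CACHE_LINE 64	/* assumed line size for this example */

/* True if an access of 'size' bytes starting at 'addr' spans two lines */
static int crosses_cache_line(uintptr_t addr, size_t size)
{
	return (addr / CACHE_LINE) != ((addr + size - 1) / CACHE_LINE);
}

int main(void)
{
	/* Hypothetical array base 60 bytes into a cache line: 4-byte
	 * aligned, but an unsigned long sized access spills into the
	 * next line, which is what forces the bus lock described above. */
	uintptr_t base = 0x1000 + 60;

	printf("8-byte access at %#lx crosses a line: %s\n",
	       (unsigned long)base,
	       crosses_cache_line(base, sizeof(uint64_t)) ? "yes" : "no");
	return 0;
}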
The upcoming #AC split lock detection mechanism will issue warnings for
this kind of access.
Force the alignment of these arrays to unsigned long. This avoids the
massive code changes which would be required when converting the array data
type to unsigned long.
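A minimal user-space sketch of the attribute the patch relies on; __aligned()
is defined locally here as a stand-in for the kernel macro, and the array
sizes are placeholders rather than the real NCAPINTS/NBUGINTS values:

#include <stdio.h>
#include <stdint.h>

/* Local stand-in for the kernel's __aligned() helper */
#define __aligned(x) __attribute__((aligned(x)))

/* Placeholder sizes, not the real NCAPINTS/NBUGINTS */
#define NCAPINTS  19
#define NBUGINTS   1

static uint32_t caps_cleared[NCAPINTS + NBUGINTS]
		__aligned(sizeof(unsigned long));

int main(void)
{
	printf("caps_cleared at %p, unsigned long aligned: %s\n",
	       (void *)caps_cleared,
	       ((uintptr_t)caps_cleared % sizeof(unsigned long)) == 0 ?
	       "yes" : "no");
	return 0;
}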
[ tglx: Rewrote changelog ]
Signed-off-by: Fenghua Yu <fenghua.yu@xxxxxxxxx>
Signed-off-by: Tony Luck <tony.luck@xxxxxxxxx>
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reviewed-by: Borislav Petkov <bp@xxxxxxx>
Link: https://lkml.kernel.org/r/20190916223958.27048-2-tony.luck@xxxxxxxxx
---
arch/x86/kernel/cpu/common.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 9ae7d1b..1e9430b 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -565,8 +565,9 @@ static const char *table_lookup_model(struct cpuinfo_x86 *c)
return NULL; /* Not found */
}
-__u32 cpu_caps_cleared[NCAPINTS + NBUGINTS];
-__u32 cpu_caps_set[NCAPINTS + NBUGINTS];
+/* Aligned to unsigned long to avoid split lock in atomic bitmap ops */
+__u32 cpu_caps_cleared[NCAPINTS + NBUGINTS] __aligned(sizeof(unsigned long));
+__u32 cpu_caps_set[NCAPINTS + NBUGINTS] __aligned(sizeof(unsigned long));
void load_percpu_segment(int cpu)
{