[PATCH V2 11/41] x86/entry: Add C user_entry_swapgs_and_fence() and kernel_entry_fence_no_swapgs()

From: Lai Jiangshan
Date: Sun Sep 26 2021 - 11:10:05 EST


From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

The C user_entry_swapgs_and_fence() implements the ASM code:
	swapgs
	FENCE_SWAPGS_USER_ENTRY

It will be used in the user entry swapgs code path; it does the swapgs
and then the lfence, preventing a speculative swapgs when coming from
kernel space.

The C kernel_entry_fence_no_swapgs() implements the ASM code:
	FENCE_SWAPGS_KERNEL_ENTRY

It will be used in the kernel entry non-swapgs code path; its lfence
prevents the swapgs from getting speculatively skipped when coming from
user space.
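
As a sketch of the intended use (hypothetical code, not part of this
patch: the real callers are added by later patches in this series, and
user_mode() is the existing pt_regs helper from <asm/ptrace.h>), an
entry path that must decide at run time where it came from would pick
one of the two helpers:

	/* Hypothetical sketch of a conditional-swapgs entry path. */
	static __always_inline void entry_fence_swapgs(struct pt_regs *regs)
	{
		if (user_mode(regs)) {
			/* from user: do the swapgs, then fence it */
			user_entry_swapgs_and_fence();
		} else {
			/* from kernel: no swapgs; fence this branch */
			kernel_entry_fence_no_swapgs();
		}
	}

Both helpers are __always_inline because they run in the early entry
code, before GSBASE is known to be correct, where an out-of-line call
would be unsafe.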

Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
arch/x86/entry/entry64.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)

diff --git a/arch/x86/entry/entry64.c b/arch/x86/entry/entry64.c
index 77838e19f1ac..dafae60d31f9 100644
--- a/arch/x86/entry/entry64.c
+++ b/arch/x86/entry/entry64.c
@@ -35,3 +35,24 @@ static __always_inline void switch_to_kernel_cr3(void)
 #else
 static __always_inline void switch_to_kernel_cr3(void) {}
 #endif
+
+/*
+ * Mitigate Spectre v1 for conditional swapgs code paths.
+ *
+ * user_entry_swapgs_and_fence is used in the user entry swapgs code path,
+ * to prevent a speculative swapgs when coming from kernel space.
+ *
+ * kernel_entry_fence_no_swapgs is used in the kernel entry non-swapgs code
+ * path, to prevent the swapgs from getting speculatively skipped when coming
+ * from user space.
+ */
+static __always_inline void user_entry_swapgs_and_fence(void)
+{
+	native_swapgs();
+	alternative("", "lfence", X86_FEATURE_FENCE_SWAPGS_USER);
+}
+
+static __always_inline void kernel_entry_fence_no_swapgs(void)
+{
+	alternative("", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL);
+}
--
2.19.1.6.gb485710b