[PATCH V4 17/50] x86/entry: Add C {user,kernel}_entry_swapgs_and_fence()

From: Lai Jiangshan
Date: Tue Oct 26 2021 - 10:29:09 EST


From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

The C user_entry_swapgs_and_fence() implements the ASM code:
swapgs
FENCE_SWAPGS_USER_ENTRY

It will be used in the user entry swapgs code path, doing the swapgs and
then the lfence to prevent a speculative swapgs when coming from kernel
space.
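
For illustration, a hypothetical C caller (not part of this patch; the
wrapper function below and its use of user_mode() are only a sketch of
how such a helper could be invoked) might look like:

	/* Sketch only: an entry coming from user space does swapgs + user fence. */
	static __always_inline void sketch_entry_from_user(struct pt_regs *regs)
	{
		if (user_mode(regs))
			user_entry_swapgs_and_fence();
	}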

The C kernel_entry_swapgs_and_fence() implements the ASM code:
swapgs
FENCE_SWAPGS_KERNEL_ENTRY

It will be used in the kernel entry swapgs code path to prevent the
swapgs from getting speculatively skipped when coming from user space.
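
Similarly, a hypothetical paranoid-style caller (not part of this patch;
the function and the gsbase flag are only a sketch) could use it when the
entry was found to be running on the user GSBASE:

	/* Sketch only: user GSBASE still active, switch to kernel GSBASE + fence. */
	static __always_inline void sketch_paranoid_entry(bool user_gsbase_active)
	{
		if (user_gsbase_active)
			kernel_entry_swapgs_and_fence();
	}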

Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
arch/x86/entry/entry64.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)

diff --git a/arch/x86/entry/entry64.c b/arch/x86/entry/entry64.c
index bdc9540f25d3..173cf621317c 100644
--- a/arch/x86/entry/entry64.c
+++ b/arch/x86/entry/entry64.c
@@ -49,6 +49,12 @@ static __always_inline void switch_to_kernel_cr3(void) {}
* fence_swapgs_kernel_entry is used in the kernel entry code path without
* CR3 write or with conditinal CR3 write only, to prevent the swapgs from
* getting speculatively skipped when coming from user space.
+ *
+ * user_entry_swapgs_and_fence is a wrapper around swapgs and the fence for
+ * the user entry code path.
+ *
+ * kernel_entry_swapgs_and_fence is a wrapper around swapgs and the fence for
+ * the kernel entry code path that enters with the user gsbase.
*/
static __always_inline void fence_swapgs_user_entry(void)
{
@@ -59,3 +65,15 @@ static __always_inline void fence_swapgs_kernel_entry(void)
{
alternative("", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL);
}
+
+static __always_inline void user_entry_swapgs_and_fence(void)
+{
+ native_swapgs();
+ fence_swapgs_user_entry();
+}
+
+static __always_inline void kernel_entry_swapgs_and_fence(void)
+{
+ native_swapgs();
+ fence_swapgs_kernel_entry();
+}
--
2.19.1.6.gb485710b