[PATCH V6 16/49] x86/entry: Add C user_entry_swapgs_and_fence()

From: Lai Jiangshan
Date: Fri Nov 26 2021 - 05:24:28 EST


From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

The C function user_entry_swapgs_and_fence() implements the following ASM code:
swapgs
FENCE_SWAPGS_USER_ENTRY

It will be used in the user entry swapgs code path, performing the swapgs
and the lfence to prevent a speculative swapgs when coming from kernel space.
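
For illustration, a C entry path built on top of this helper could look
roughly like the sketch below. This is a hypothetical example only;
do_entry() and the branch structure are assumptions for illustration, not
part of this patch:

	/* Hypothetical sketch, not part of this patch. */
	static __always_inline void do_entry(struct pt_regs *regs)
	{
		if (user_mode(regs)) {
			/* From user space: swapgs, then lfence. */
			user_entry_swapgs_and_fence();
		} else {
			/*
			 * From kernel space: GSBASE already holds the
			 * kernel value; only fence, to prevent the swapgs
			 * from getting speculatively skipped when coming
			 * from user space.
			 */
			fence_swapgs_kernel_entry();
		}
	}

Note that such a helper has to stay __always_inline and free of
instrumentation, since it runs before the kernel GSBASE (and thus per-CPU
data) is usable.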

Cc: Josh Poimboeuf <jpoimboe@xxxxxxxxxx>
Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
---
arch/x86/entry/entry64.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/arch/x86/entry/entry64.c b/arch/x86/entry/entry64.c
index bdc9540f25d3..3db503ea0703 100644
--- a/arch/x86/entry/entry64.c
+++ b/arch/x86/entry/entry64.c
@@ -49,6 +49,9 @@ static __always_inline void switch_to_kernel_cr3(void) {}
* fence_swapgs_kernel_entry is used in the kernel entry code path without
* CR3 write or with conditional CR3 write only, to prevent the swapgs from
* getting speculatively skipped when coming from user space.
+ *
+ * user_entry_swapgs_and_fence is a wrapper that does both the swapgs and
+ * the fence for the user entry code path.
*/
static __always_inline void fence_swapgs_user_entry(void)
{
@@ -59,3 +62,9 @@ static __always_inline void fence_swapgs_kernel_entry(void)
{
alternative("", "lfence", X86_FEATURE_FENCE_SWAPGS_KERNEL);
}
+
+static __always_inline void user_entry_swapgs_and_fence(void)
+{
+	native_swapgs();
+	fence_swapgs_user_entry();
+}
--
2.19.1.6.gb485710b