Re: [PATCH 25/24] x86/traps: Rewrite native_load_gs_index in C code

From: Lai Jiangshan
Date: Mon Sep 13 2021 - 22:04:53 EST




On 2021/9/14 04:01, Andy Lutomirski wrote:


> On Thu, Sep 2, 2021, at 3:50 AM, Lai Jiangshan wrote:
>> From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
>>
>> There is no constraint/limitation that forces native_load_gs_index() to
>> be in ASM code.
>>
>> Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
>>
>>   #ifdef CONFIG_X86_64
>> +
>> +/*
>> + * Reload gs selector with exception handling
>> + * selector: new selector
>> + *
>> + * Is noinstr as it shouldn't be instrumented.
>> + */
>> +noinstr void native_load_gs_index(unsigned int selector)
>> +{
>> +	unsigned long flags;
>> +
>> +	local_irq_save(flags);
>
> This patch would be a bit less alarming if you moved the swapgs into asm.

Emmm, this patch is not so clean or convincing in C.

I think Peter is still reworking the patchset, and his rework may
include improving this patch. I'm okay if this patch is dropped.
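
For illustration only, one way to read that suggestion is to pull the
first swapgs into the asm statement as well, so that the whole window
running with the user GS base sits inside a single asm block. The sketch
below just rearranges the code from the patch above (same labels, same
fixup handler); it is not necessarily what the reworked series will do:

noinstr void native_load_gs_index(unsigned int selector)
{
	unsigned long flags;

	local_irq_save(flags);
	/* Sketch: the first swapgs moved from C into the asm statement. */
	asm volatile(
		"	swapgs					\n"
		".global asm_load_gs_index_gs_change		\n"
		"asm_load_gs_index_gs_change:			\n"
		"1:	movl %0, %%gs				\n"
		"	swapgs					\n"
		"2:						\n"
		_ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_clear_gs)
		:: "r" (selector) : "memory");
	alternative("", "mfence", X86_BUG_SWAPGS_FENCE);
	local_irq_restore(flags);
}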

> Also, this needs a comment explaining why skipping the swapgs back to kernel gs in the exception path is correct.


I think it is well known that an exception handler registered via
ASM_EXTABLE runs in kernel context, where the kernel GS is already
active.
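
For context, the extable fixup handlers in this area of the kernel
(arch/x86/mm/extable.c) have roughly the shape sketched below and are
called from the exception handler itself, i.e. after error_entry() has
already switched to the kernel GS base. Only the generic "resume at the
fixup address" part is shown (ex_fixup_addr() is the small helper in
extable.c that computes the absolute fixup address); whatever
ex_handler_clear_gs additionally does to %gs is deliberately left out:

static bool ex_handler_sketch(const struct exception_table_entry *fixup,
			      struct pt_regs *regs, int trapnr,
			      unsigned long error_code,
			      unsigned long fault_addr)
{
	/* Runs in kernel context with the kernel GS base active. */
	regs->ip = ex_fixup_addr(fixup);	/* continue at the "2:" label */
	return true;
}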

It does need a comment explaining why the label
asm_load_gs_index_gs_change is needed and how it helps error_entry()
switch back to the kernel GS.
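
Roughly, I mean something like the following in a C-version
error_entry() for faults that came from kernel mode; only the label
asm_load_gs_index_gs_change comes from the patch, the helper name here
is made up for illustration:

extern const char asm_load_gs_index_gs_change[];

/*
 * Sketch only: a fault on the "movl %0, %%gs" happens in kernel mode
 * but still with the user GS base active (native_load_gs_index() has
 * already done the first swapgs), so error_entry() must swapgs back to
 * the kernel GS base before touching anything per-CPU.
 */
static __always_inline void fixup_gs_change_fault(struct pt_regs *regs)
{
	if (regs->ip == (unsigned long)asm_load_gs_index_gs_change)
		native_swapgs();	/* back to the kernel GS base */
}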

Since the C-version error_entry() has to check
asm_load_gs_index_gs_change anyway, I think another way to handle a
fault on "mov %gs" is to do the %gs fixup directly in the C-version
error_entry() (see patch 11). It would be more direct, simpler, and
self-documenting.

Thank you for reviewing.

>> +	native_swapgs();
>> +	asm volatile(
>> +		".global asm_load_gs_index_gs_change	\n"
>> +		"asm_load_gs_index_gs_change:		\n"
>> +		"1:	movl %0, %%gs			\n"
>> +		"	swapgs				\n"
>> +		"2:					\n"
>> +		_ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_clear_gs)
>> +		:: "r" (selector) : "memory");
>> +	alternative("", "mfence", X86_BUG_SWAPGS_FENCE);
>> +	local_irq_restore(flags);
>> +}
>> +