Re: [tip: x86/cleanups] x86/segment: Use MOVL when reading segment registers
From: Uros Bizjak
Date: Thu Mar 12 2026 - 05:37:26 EST
On Tue, Jan 20, 2026 at 1:38 PM H. Peter Anvin <hpa@xxxxxxxxx> wrote:
>
> On January 20, 2026 4:08:01 AM PST, tip-bot2 for Uros Bizjak <tip-bot2@xxxxxxxxxxxxx> wrote:
> >The following commit has been merged into the x86/cleanups branch of tip:
> >
> >Commit-ID: 53ed3d91a141f5c8b3bce45b0004fbbfefe77956
> >Gitweb: https://git.kernel.org/tip/53ed3d91a141f5c8b3bce45b0004fbbfefe77956
> >Author: Uros Bizjak <ubizjak@xxxxxxxxx>
> >AuthorDate: Mon, 05 Jan 2026 10:02:32 +01:00
> >Committer: Borislav Petkov (AMD) <bp@xxxxxxxxx>
> >CommitterDate: Tue, 20 Jan 2026 12:34:58 +01:00
> >
> >x86/segment: Use MOVL when reading segment registers
> >
> >Use MOVL when reading segment registers to avoid 0x66 operand-size override
> >insn prefix. The segment value is always 16-bit and gets zero-extended to the
> >full 32-bit size.
> >
> >Example:
> >
> > 4e4: 66 8c c0 mov %es,%ax
> > 4e7: 66 89 83 80 0b 00 00 mov %ax,0xb80(%rbx)
> >
> > 4e4: 8c c0 mov %es,%eax
> > 4e6: 66 89 83 80 0b 00 00 mov %ax,0xb80(%rbx)
> >
> >Also, use the %k0 modifier which generates the SImode (32-bit integer)
> >register name for the target register.
> >
> > [ bp: Extend and clarify commit message. ]
> >
> >Signed-off-by: Uros Bizjak <ubizjak@xxxxxxxxx>
> >Signed-off-by: Borislav Petkov (AMD) <bp@xxxxxxxxx>
> >Reviewed-by: H. Peter Anvin (Intel) <hpa@xxxxxxxxx>
> >Tested-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
> >Link: https://patch.msgid.link/20260105090422.6243-1-ubizjak@xxxxxxxxx
> >---
> > arch/x86/include/asm/segment.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> >diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
> >index f59ae71..9f5be2b 100644
> >--- a/arch/x86/include/asm/segment.h
> >+++ b/arch/x86/include/asm/segment.h
> >@@ -348,7 +348,7 @@ static inline void __loadsegment_fs(unsigned short value)
> > * Save a segment register away:
> > */
> > #define savesegment(seg, value) \
> >- asm("mov %%" #seg ",%0":"=r" (value) : : "memory")
> >+ asm("movl %%" #seg ",%k0" : "=r" (value) : : "memory")
> >
> > #endif /* !__ASSEMBLER__ */
> > #endif /* __KERNEL__ */
> >
>
> Incidentally, why aren't we using =rm here? Segment moves support memory operands.
Resurrecting an old thread.
We would like to always use MOVL to avoid the 0x66 operand-size override
prefix when reading into a register, but MOVL does not support memory
operands for segment moves; MOVW is required in that case.
Also note that both GCC and clang have their own share of problems with
"=rm" output asm constraints: GCC mostly prefers the register output,
while clang prefers the memory output.
So, due to the above reasons, compilation results in suboptimal code.
E.g., current_save_fsgs() compiles to:
32333: 8c e0 mov %fs,%eax
32335: 66 89 82 0c 0c 00 00 mov %ax,0xc0c(%rdx)
3233c: 8c e8 mov %gs,%eax
3233e: 66 89 82 0e 0c 00 00 mov %ax,0xc0e(%rdx)
To solve this, I'd like to propose a specialized __savesegment() macro
that allows an optimized save directly to a memory location:
#define __savesegment(seg, loc) \
do { \
BUILD_BUG_ON(sizeof(loc) != 2); \
asm volatile("movw %%" #seg ",%0" : "=m" (loc)); \
} while (0)
so one can use this variant when the segment register is really saved
to memory. The above code improves to:
32333: 8c a2 0c 0c 00 00 mov %fs,0xc0c(%rdx)
32339: 8c aa 0e 0c 00 00 mov %gs,0xc0e(%rdx)
Unfortunately, inline asm does not support alternative instruction
sequences that depend on the output argument type (register vs. memory),
so the selection cannot be automatic. But with a specialized macro, a
developer can choose the variant that results in the best code.
A prototype patch is attached to the message.
Uros.
diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 0f4283dcd0c4..82cb692da116 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -347,8 +347,14 @@ static inline void __loadsegment_fs(unsigned short value)
/*
* Save a segment register away:
*/
-#define savesegment(seg, value) \
- asm volatile("movl %%" #seg ",%k0" : "=r" (value))
+#define savesegment(seg, var) \
+ asm volatile("movl %%" #seg ",%k0" : "=r" (var))
+
+#define __savesegment(seg, loc) \
+do { \
+ BUILD_BUG_ON(sizeof(loc) != 2); \
+ asm volatile("movw %%" #seg ",%0" : "=m" (loc)); \
+} while (0)
#endif /* !__ASSEMBLER__ */
#endif /* __KERNEL__ */