Re: [RFT PATCH] x86/hyperv: Use __naked attribute to fix stackless C function

From: Andrew Cooper

Date: Thu Feb 26 2026 - 08:28:20 EST


On 26/02/2026 1:07 pm, Ard Biesheuvel wrote:
>
> On Thu, 26 Feb 2026, at 13:01, Andrew Cooper wrote:
>>> @@ -133,49 +150,36 @@ static noinline void hv_crash_clear_kernpt(void)
>>>   * available. We restore kernel GDT, and rest of the context, and continue
>>>   * to kexec.
>>>   */
>>> -static asmlinkage void __noreturn hv_crash_c_entry(void)
>>> +static void __naked hv_crash_c_entry(void)
>>>  {
>>> -	struct hv_crash_ctxt *ctxt = &hv_crash_ctxt;
>>> -	/* first thing, restore kernel gdt */
>>> -	native_load_gdt(&ctxt->gdtr);
>>> +	asm volatile("lgdt %0" : : "m" (hv_crash_ctxt.gdtr));
>>> -	asm volatile("movw %%ax, %%ss" : : "a"(ctxt->ss));
>>> -	asm volatile("movq %0, %%rsp" : : "m"(ctxt->rsp));
>>> +	asm volatile("movw %%ax, %%ss" : : "a"(hv_crash_ctxt.ss));
>>> +	asm volatile("movq %0, %%rsp" : : "m"(hv_crash_ctxt.rsp));
>> I know this is pre-existing, but the asm here is poor.
>>
>> All segment register loads can take a memory operand, rather than
>> forcing the value through %eax, which in turn reduces the setup logic
>> the compiler needs to emit.
>>
>> Something like this:
>>
>>     "movl %0, %%ss" : : "m"(hv_crash_ctxt.ss)
>>
>> ought to do.
>>
> 'movw' seems to work, yes.

movw works, but is sub-optimal.

The segment register instructions are somewhat weird even by x86 standards.

They should always be written as 32-bit operations (movl, and %eax),
which drops the operand-size prefix; the prefix is not necessary for
these instructions to function correctly.

It's absolutely marginal, but it always pains me to read asm like this
and see the myth about how to access segment selectors repeated time
and time again.

~Andrew