Re: [PATCH RESEND 2/2] x86/paravirt: Use XOR r32,r32 to clear register in pv_vcpu_is_preempted()

From: H. Peter Anvin

Date: Wed Jan 07 2026 - 04:55:18 EST


On January 5, 2026 1:39:07 AM PST, Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
>x86_64 zero-extends the result of 32-bit operations into the full
>64-bit register, so XOR r32,r32 is functionally equivalent to
>XOR r64,r64 but avoids a REX prefix byte when legacy registers are used.
>
>Signed-off-by: Uros Bizjak <ubizjak@xxxxxxxxx>
>Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
>Cc: Ajay Kaher <ajay.kaher@xxxxxxxxxxxx>
>Cc: Alexey Makhalov <alexey.makhalov@xxxxxxxxxxxx>
>Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>Cc: Ingo Molnar <mingo@xxxxxxxxxx>
>Cc: Borislav Petkov <bp@xxxxxxxxx>
>Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
>Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
>---
> arch/x86/include/asm/paravirt.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
>index 4f6ec60b4cb3..59aec695ae5f 100644
>--- a/arch/x86/include/asm/paravirt.h
>+++ b/arch/x86/include/asm/paravirt.h
>@@ -577,7 +577,7 @@ static __always_inline void pv_kick(int cpu)
> static __always_inline bool pv_vcpu_is_preempted(long cpu)
> {
> return PVOP_ALT_CALLEE1(bool, lock.vcpu_is_preempted, cpu,
>- "xor %%" _ASM_AX ", %%" _ASM_AX,
>+ "xor %%eax, %%eax",
> ALT_NOT(X86_FEATURE_VCPUPREEMPT));
> }
>

Acked-by: H. Peter Anvin (Intel) <hpa@xxxxxxxxx>