Re: [PATCH 1/2] KVM: x86: Use asm_inline() instead of asm() in kvm_hypercall[0-4]()
From: Sean Christopherson
Date: Mon Apr 14 2025 - 21:05:07 EST
Nit, this is guest code, i.e. should use "kvm/x86:" for the scope. No need to
send a new version just for that.
On Mon, Apr 14, 2025, Uros Bizjak wrote:
> Use asm_inline() to instruct the compiler that the size of the asm()
> statement is the minimum size of one instruction, regardless of how
> many instructions the compiler thinks it contains. The ALTERNATIVE
> macro expands to several pseudo directives, which causes the
> instruction length estimate to count more than 20 instructions.
>
> bloat-o-meter reports a minimal code size increase
> (x86_64 defconfig, gcc-14.2.1):
>
> add/remove: 0/0 grow/shrink: 1/0 up/down: 10/0 (10)
>
> Function old new delta
> -----------------------------------------------------
> __send_ipi_mask 525 535 +10
>
> Total: Before=23751224, After=23751234, chg +0.00%
>
> due to different compiler decisions based on the more precise size
> estimates.
>
> No functional change intended.
>
> Signed-off-by: Uros Bizjak <ubizjak@xxxxxxxxx>
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Vitaly Kuznetsov <vkuznets@xxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> ---
> arch/x86/include/asm/kvm_para.h | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
> index 57bc74e112f2..519ab5aee250 100644
> --- a/arch/x86/include/asm/kvm_para.h
> +++ b/arch/x86/include/asm/kvm_para.h
> @@ -38,7 +38,7 @@ static inline long kvm_hypercall0(unsigned int nr)
> if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> return tdx_kvm_hypercall(nr, 0, 0, 0, 0);
>
> - asm volatile(KVM_HYPERCALL
> + asm_inline volatile(KVM_HYPERCALL
> : "=a"(ret)
> : "a"(nr)
> : "memory");
> @@ -52,7 +52,7 @@ static inline long kvm_hypercall1(unsigned int nr, unsigned long p1)
> if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> return tdx_kvm_hypercall(nr, p1, 0, 0, 0);
>
> - asm volatile(KVM_HYPERCALL
> + asm_inline volatile(KVM_HYPERCALL
> : "=a"(ret)
> : "a"(nr), "b"(p1)
> : "memory");
> @@ -67,7 +67,7 @@ static inline long kvm_hypercall2(unsigned int nr, unsigned long p1,
> if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> return tdx_kvm_hypercall(nr, p1, p2, 0, 0);
>
> - asm volatile(KVM_HYPERCALL
> + asm_inline volatile(KVM_HYPERCALL
> : "=a"(ret)
> : "a"(nr), "b"(p1), "c"(p2)
> : "memory");
> @@ -82,7 +82,7 @@ static inline long kvm_hypercall3(unsigned int nr, unsigned long p1,
> if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> return tdx_kvm_hypercall(nr, p1, p2, p3, 0);
>
> - asm volatile(KVM_HYPERCALL
> + asm_inline volatile(KVM_HYPERCALL
> : "=a"(ret)
> : "a"(nr), "b"(p1), "c"(p2), "d"(p3)
> : "memory");
> @@ -98,7 +98,7 @@ static inline long kvm_hypercall4(unsigned int nr, unsigned long p1,
> if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
> return tdx_kvm_hypercall(nr, p1, p2, p3, p4);
>
> - asm volatile(KVM_HYPERCALL
> + asm_inline volatile(KVM_HYPERCALL
> : "=a"(ret)
> : "a"(nr), "b"(p1), "c"(p2), "d"(p3), "S"(p4)
> : "memory");
> --
> 2.49.0
>
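For readers who haven't seen the construct before, here is a minimal standalone
sketch (not part of the patch; the .skip directive and the function/section
names are made-up stand-ins for the directives ALTERNATIVE emits) of the size
estimate difference the changelog describes. It should build on x86-64 with a
recent gcc (9+) or clang (11+), e.g. "gcc -O2 -c sketch.c":

/* sketch.c: asm vs. asm inline and the compiler's size estimate */

static inline unsigned long bump_asm(unsigned long x)
{
	/*
	 * One real instruction, but the extra directives (stand-ins for
	 * what ALTERNATIVE emits) inflate the compiler's per-statement
	 * size estimate and can tip inlining decisions in callers.
	 */
	asm volatile(".pushsection .discard.sketch, \"a\"\n\t"
		     ".skip 64\n\t"
		     ".popsection\n\t"
		     "incq %0"
		     : "+r" (x));
	return x;
}

static inline unsigned long bump_asm_inline(unsigned long x)
{
	/*
	 * "asm inline" (what the kernel's asm_inline macro expands to
	 * when the compiler supports it) makes the statement count as
	 * the minimum possible size, no matter how long the string looks.
	 */
	asm inline volatile(".pushsection .discard.sketch, \"a\"\n\t"
			    ".skip 64\n\t"
			    ".popsection\n\t"
			    "incq %0"
			    : "+r" (x));
	return x;
}

unsigned long use_both(unsigned long x)
{
	return bump_asm(x) + bump_asm_inline(x);
}

Comparing gcc's -fdump-ipa-inline output for the two variants should show the
plain asm version being charged a much larger estimated size, which is what
drives the different compiler decisions mentioned in the changelog.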