Re: [PATCH v2 15/15] x86/lib/memcpy_64.S: Convert memcpy to ALTERNATIVE_2 macro
From: Ingo Molnar
Date: Wed Mar 04 2015 - 02:26:45 EST
* Borislav Petkov <bp@xxxxxxxxx> wrote:
> From: Borislav Petkov <bp@xxxxxxx>
>
> Make the REP_GOOD variant the default after alternatives have run.
>
> Signed-off-by: Borislav Petkov <bp@xxxxxxx>
> ---
> arch/x86/lib/memcpy_64.S | 68 +++++++++++++++---------------------------------
> 1 file changed, 21 insertions(+), 47 deletions(-)
>
> diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
> index bbfdacc01760..b046664f5a1c 100644
> --- a/arch/x86/lib/memcpy_64.S
> +++ b/arch/x86/lib/memcpy_64.S
> @@ -1,12 +1,20 @@
> /* Copyright 2002 Andi Kleen */
>
> #include <linux/linkage.h>
> -
> #include <asm/cpufeature.h>
> #include <asm/dwarf2.h>
> #include <asm/alternative-asm.h>
>
> /*
> + * We build a jump to memcpy_orig by default, which gets NOPped out on
> + * the majority of x86 CPUs, i.e. those which set REP_GOOD. In addition,
> + * on CPUs which have the enhanced REP MOVSB/STOSB feature (ERMS), those
> + * NOPs are changed to a jmp to memcpy_erms, which does the REP MOVSB
> + * mem copy.
> + */
> +
> +.weak memcpy
> +
> +/*
> * memcpy - Copy a memory block.
> *
> * Input:
> @@ -17,15 +25,11 @@
> * Output:
> * rax original destination
> */
> +ENTRY(__memcpy)
> +ENTRY(memcpy)
> + ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
> + "jmp memcpy_erms", X86_FEATURE_ERMS
>
> +ENDPROC(memcpy)
> +ENDPROC(__memcpy)
> +ENTRY(memcpy_erms)
> movq %rdi, %rax
> movq %rdx, %rcx
> rep movsb
> ret
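So, if I'm reading the patch right, after alternatives have run the
memcpy() entry point ends up as one of:

	no REP_GOOD, no ERMS:	jmp memcpy_orig
	REP_GOOD:		NOPs, falling through to the inline
				REP MOVSQ body
	ERMS:			jmp memcpy_erms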
Since most CPUs we care about have ERMS, wouldn't it be better to
patch the actual memcpy_erms sequence directly into the primary
memcpy() function? It's just about 9 bytes AFAICT.
This would remove a jump instruction from the most common memcpy
variant: worth it!
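Something like this (completely untested) sketch, assuming the
automatic padding added earlier in this series grows the original
5-byte JMP to the length of the longest replacement:

	ENTRY(__memcpy)
	ENTRY(memcpy)
		ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
			      "movq %rdi, %rax; movq %rdx, %rcx; rep movsb; ret", \
			      X86_FEATURE_ERMS

The ERMS replacement sequence encodes to 9 bytes:

	48 89 f8	movq %rdi, %rax
	48 89 d1	movq %rdx, %rcx
	f3 a4		rep movsb
	c3		ret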
Thanks,
Ingo