Re: [PATCH v2] x86/cpu: Use SERIALIZE in sync_core() when available

From: Ricardo Neri
Date: Wed Aug 05 2020 - 16:16:15 EST


On Wed, Aug 05, 2020 at 06:48:40AM +0200, Borislav Petkov wrote:
> On Tue, Aug 04, 2020 at 07:10:59PM -0700, Ricardo Neri wrote:
> > The SERIALIZE instruction gives software a way to force the processor to
> > complete all modifications to flags, registers and memory from previous
> > instructions and drain all buffered writes to memory before the next
> > instruction is fetched and executed. Thus, it serves the purpose of
> > sync_core(). Use it when available.
> >
> > Commit 7117f16bf460 ("objtool: Fix ORC vs alternatives") enforced stack
> > invariance in alternatives. The iret-to-self does not comply with such
> > invariance. Thus, it cannot be used inside alternative code. Instead, use
> > an alternative that jumps to SERIALIZE when available.
> >
> > Cc: Andy Lutomirski <luto@xxxxxxxxxx>
> > Cc: Cathy Zhang <cathy.zhang@xxxxxxxxx>
> > Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
> > Cc: Fenghua Yu <fenghua.yu@xxxxxxxxx>
> > Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> > Cc: Kyung Min Park <kyung.min.park@xxxxxxxxx>
> > Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> > Cc: "Ravi V. Shankar" <ravi.v.shankar@xxxxxxxxx>
> > Cc: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> > Cc: linux-edac@xxxxxxxxxxxxxxx
> > Cc: linux-kernel@xxxxxxxxxxxxxxx
> > Suggested-by: Andy Lutomirski <luto@xxxxxxxxxx>
> > Signed-off-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
> > ---
> > This is a v2 from my initial submission [1]. The first three patches of
> > the series have been merged in Linus' tree. Hence, I am submitting only
> > this patch for review.
> >
> > [1]. https://lkml.org/lkml/2020/7/27/8
> >
> > Changes since v1:
> > * Support SERIALIZE using alternative runtime patching.
> > (Peter Zijlstra, H. Peter Anvin)
> > * Added a note to specify which version of binutils supports SERIALIZE.
> > (Peter Zijlstra)
> > * Verified that (::: "memory") is used. (H. Peter Anvin)
> > ---
> > arch/x86/include/asm/special_insns.h | 2 ++
> > arch/x86/include/asm/sync_core.h | 10 +++++++++-
> > 2 files changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
> > index 59a3e13204c3..25cd67801dda 100644
> > --- a/arch/x86/include/asm/special_insns.h
> > +++ b/arch/x86/include/asm/special_insns.h
> > @@ -10,6 +10,8 @@
> > #include <linux/irqflags.h>
> > #include <linux/jump_label.h>
> >
> > +/* Instruction opcode for SERIALIZE; supported in binutils >= 2.35. */
> > +#define __ASM_SERIALIZE ".byte 0xf, 0x1, 0xe8"
> > /*
> > * Volatile isn't enough to prevent the compiler from reordering the
> > * read/write functions for the control registers and messing everything up.
> > diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
> > index fdb5b356e59b..201ea3d9a6bd 100644
> > --- a/arch/x86/include/asm/sync_core.h
> > +++ b/arch/x86/include/asm/sync_core.h
> > @@ -5,15 +5,19 @@
> > #include <linux/preempt.h>
> > #include <asm/processor.h>
> > #include <asm/cpufeature.h>
> > +#include <asm/special_insns.h>
> >
> > #ifdef CONFIG_X86_32
> > static inline void iret_to_self(void)
> > {
> > asm volatile (
> > + ALTERNATIVE("", "jmp 2f", X86_FEATURE_SERIALIZE)
> > "pushfl\n\t"
> > "pushl %%cs\n\t"
> > "pushl $1f\n\t"
> > "iret\n\t"
> > + "2:\n\t"
> > + __ASM_SERIALIZE "\n"
> > "1:"
> > : ASM_CALL_CONSTRAINT : : "memory");
> > }
> > @@ -23,6 +27,7 @@ static inline void iret_to_self(void)
> > unsigned int tmp;
> >
> > asm volatile (
> > + ALTERNATIVE("", "jmp 2f", X86_FEATURE_SERIALIZE)
>
> Why is this and above stuck inside the asm statement?
>
> Why can't you simply do:
>
> if (static_cpu_has(X86_FEATURE_SERIALIZE)) {
> asm volatile(__ASM_SERIALIZE ::: "memory");
> return;
> }
>
> on function entry instead of making it more unreadable for no particular
> reason?

In my first submission I had implemented it as you describe. The only
difference was that I used boot_cpu_has() instead of static_cpu_has(),
as the latter has a comment stating that:
"Use static_cpu_has() only in fast paths (...) boot_cpu_has() is
already fast enough for the majority of cases..."

sync_core_before_usermode() already handles what I think are the
critical paths.

Thanks and BR,
Ricardo