Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()

From: Uros Bizjak
Date: Wed Oct 11 2023 - 14:42:34 EST


On Tue, Oct 10, 2023 at 8:52 PM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Tue, 10 Oct 2023 at 11:41, Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
> >
> > Yes, but does it CSE the load from multiple addresses?
>
> Yes, it should do that just right, because the *asm* itself is
> identical, just the offsets (that gcc then adds separately) would be
> different.
>
> This is not unlike how we depend on gcc CSE'ing the "current" part
> when doing multiple accesses of different members off that:
>
> static __always_inline struct task_struct *get_current(void)
> {
> return this_cpu_read_stable(pcpu_hot.current_task);
> }
>
> with this_cpu_read_stable() being an inline asm that lacks the memory
> component (the same way the fallback hides it by just using
> "%%gs:this_cpu_off" directly inside the asm, instead of exposing it as
> a memory access to gcc).
>
> Of course, I think that with the "__seg_gs" patches, we *could* expose
> the "%%gs:this_cpu_off" part to gcc, since gcc hopefully then can do
> the alias analysis on that side and see that it can CSE the thing
> anyway.
>
> That might be a better choice than __FORCE_ORDER, in fact.
>
> IOW, something like
>
> static __always_inline unsigned long new_cpu_offset(void)
> {
> unsigned long res;
> asm(ALTERNATIVE(
> "movq " __percpu_arg(1) ",%0",
> "rdgsbase %0",
> X86_FEATURE_FSGSBASE)
> : "=r" (res)
> : "m" (this_cpu_off));
> return res;
> }
>
> would presumably work together with your __seg_gs stuff.
>
> UNTESTED!!

The attached patch was tested on targets both with and without the
FSGSBASE CPUID feature. It works!

The patch improves amd_pmu_enable_virt() in the same way as reported
in the original patch submission and also reduces the number of percpu
offset reads (either from this_cpu_off or with rdgsbase) from 1663 to
1571.

The only drawback is a larger binary size:

text data bss dec hex filename
25546594 4387686 808452 30742732 1d518cc vmlinux-new.o
25515256 4387814 808452 30711522 1d49ee2 vmlinux-old.o

The text size grows by about 31k (0.123%), probably due to the 1578
rdgsbase alternative entries.

I'll prepare and submit a patch for the tip/percpu branch.

Uros.


>
> Linus
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index 34734d730463..8450fe4a2753 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -31,18 +31,32 @@
#define __percpu_prefix "%%"__stringify(__percpu_seg)":"
#define __my_cpu_offset this_cpu_read(this_cpu_off)

-/*
- * Compared to the generic __my_cpu_offset version, the following
- * saves one instruction and avoids clobbering a temp register.
- */
-#define arch_raw_cpu_ptr(ptr) \
-({ \
- unsigned long tcp_ptr__; \
- asm ("add " __percpu_arg(1) ", %0" \
- : "=r" (tcp_ptr__) \
- : "m" (this_cpu_off), "0" (ptr)); \
- (typeof(*(ptr)) __kernel __force *)tcp_ptr__; \
+#ifdef CONFIG_X86_64
+#define arch_raw_cpu_ptr(ptr) \
+({ \
+ unsigned long tcp_ptr__; \
+ asm (ALTERNATIVE("movq " __percpu_arg(1) ", %0", \
+ "rdgsbase %0", \
+ X86_FEATURE_FSGSBASE) \
+ : "=r" (tcp_ptr__) \
+ : "m" (this_cpu_off)); \
+ \
+ tcp_ptr__ += (unsigned long)(ptr); \
+ (typeof(*(ptr)) __kernel __force *)tcp_ptr__; \
})
+#else /* CONFIG_X86_64 */
+#define arch_raw_cpu_ptr(ptr) \
+({ \
+ unsigned long tcp_ptr__; \
+ asm ("movl " __percpu_arg(1) ", %0" \
+ : "=r" (tcp_ptr__) \
+ : "m" (this_cpu_off)); \
+ \
+ tcp_ptr__ += (unsigned long)(ptr); \
+ (typeof(*(ptr)) __kernel __force *)tcp_ptr__; \
+})
+#endif /* CONFIG_X86_64 */
+
#else
#define __percpu_prefix ""
#endif