Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()

From: Uros Bizjak
Date: Wed Oct 11 2023 - 17:33:06 EST


On Wed, Oct 11, 2023 at 9:37 PM Linus Torvalds
<torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, 11 Oct 2023 at 00:42, Nadav Amit <namit@xxxxxxxxxx> wrote:
> >
> > You are correct. Having said that, for "current" we may be able to do
> > something better, as regardless of preemption "current" remains the
> > same, and this_cpu_read_stable() does miss some opportunities to avoid
> > reloading the value from memory.
>
> It would be lovely to generate even better code, but that
> this_cpu_read_stable() thing is the best we've come up with. It
> intentionally has *no* memory inputs or anything else that might make
> gcc think "I need to re-do this".

The attached patch makes this_cpu_read_stable() a bit better by using
RIP-relative addressing. It gives an immediate reduction of the text
section by 4kB and also makes the kernel a bit more PIE friendly.
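Just to illustrate the effect (with a made-up percpu variable name, not
taken from the patch): a read that is currently emitted with 32-bit
absolute addressing,

	movq %gs:pcpu_var, %rax		# ModRM + SIB + disp32

is emitted RIP-relative instead,

	movq %gs:pcpu_var(%rip), %rax	# ModRM + disp32, no SIB byte

which is one byte shorter per access (hence the ~4kB smaller .text) and
uses a PC-relative relocation instead of an absolute one, which is what
makes it friendlier to PIE.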

> For example, instead of using "m" as a memory input, it very
> intentionally uses "p", to make it clear that it just uses the
> _pointer_, not the memory location itself.
>
> That's obviously a lie - it actually does access memory - but it's a
> lie exactly because of the reason you mention: even when the memory
> location changes due to preemption (or explicit scheduling), it always
> changes back to the value we care about.
>
> So gcc _should_ be able to CSE it in all situations, but it's entirely
> possible that gcc then decides to re-generate the value for whatever
> reason. It's a cheap op, so it's ok to regen, of course, but the
> intent is basically to let the compiler re-use the value as much as
> possible.
>
> But it *is* probably better to regenerate the value than it would be
> to spill and re-load it, and from the cases I've seen, this all tends
> to work fairly well.

Reading the above, it looks to me that we don't want to play games
with "const aliased" versions of current_task [1], as proposed by
Nadav in his patch series. The current version of
this_cpu_read_stable() (plus the attached trivial patch) is as good as
it can get.

[1] https://lore.kernel.org/lkml/20190823224424.15296-8-namit@xxxxxxxxxx/
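For anyone who wants to poke at it, here is a minimal user-space sketch
of the "p"-constraint pattern described above (the names are made up,
it is not the kernel's actual macro; build with
gcc -O2 -fno-pie -no-pie so the bare symbol operand assembles):

#include <stdio.h>

/*
 * Sketch of the "p"-constraint trick behind this_cpu_read_stable():
 * the asm consumes only the _address_ of the variable as a "p" input,
 * with no "m" memory operand, so the compiler sees no memory input to
 * invalidate and is free to CSE repeated reads.  The kernel template
 * also carries the %gs: segment prefix; it is dropped here so the
 * example builds and runs in user space.
 */
static long stable_var = 42;

#define read_stable(var)					\
({								\
	long val_;						\
	asm("movq %P1, %0"					\
	    : "=r" (val_)					\
	    : "p" (&(var)));					\
	val_;							\
})

int main(void)
{
	/* At -O2 gcc may fold the two reads below into a single load. */
	printf("%ld %ld\n", read_stable(stable_var), read_stable(stable_var));
	return 0;
}

Since the asm is not volatile and has no memory input, gcc treats both
reads as the same pure function of a constant address and can reuse the
first result, which is exactly the behaviour being relied on for
"current".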

Uros.
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index e047a0bc5554..b74169434b85 100644
--- a/arch/x86/include/asm/percpu.h
+++ b/arch/x86/include/asm/percpu.h
@@ -4,8 +4,10 @@

#ifdef CONFIG_X86_64
#define __percpu_seg gs
+#define __percpu_rip "(%%rip)"
#else
#define __percpu_seg fs
+#define __percpu_rip ""
#endif

#ifdef __ASSEMBLY__
@@ -85,7 +87,7 @@
#define __my_cpu_ptr(ptr) (__my_cpu_type(*ptr) *)(uintptr_t)(ptr)
#define __my_cpu_var(var) (*__my_cpu_ptr(&var))
#define __percpu_arg(x) __percpu_prefix "%" #x
-#define __force_percpu_arg(x) __force_percpu_prefix "%" #x
+#define __force_percpu_arg(x) __force_percpu_prefix "%" #x __percpu_rip

/*
* Initialized pointers to per-cpu variables needed for the boot