Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()

From: Nadav Amit
Date: Thu Oct 12 2023 - 11:19:11 EST



> On Oct 12, 2023, at 12:54 AM, Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Wed, 11 Oct 2023 at 14:33, Uros Bizjak <ubizjak@xxxxxxxxx> wrote:
>>
>> Reading the above, it looks to me that we don't want to play games
>> with "const aliased" versions of current_task [1], as proposed by
>> Nadav in his patch series.
>
> Well, maybe I'd like it if I saw what the effect of it was, but that
> patch mentions "sync_mm_rss()" which doesn't actually exist
> (SPLIT_RSS_COUNTING is never defined, the split version is gone and
> hasn't existed since commit f1a7941243c1 "mm: convert mm's rss stats
> into percpu_counter")

So I added a new version of the current aliasing (well, actually aliasing
pcpu_hot in the new version) on top of Uros’s patches, and the effect can be
seen in many functions. I don’t want to bother you with many examples, so here
is a common and simple one:

Currently, syscall_exit_work() starts with:

0xffffffff8111e120 <+0>: push %rbp
0xffffffff8111e121 <+1>: mov %rdi,%rbp
0xffffffff8111e124 <+4>: push %rbx
0xffffffff8111e125 <+5>: mov %rsi,%rbx
0xffffffff8111e128 <+8>: and $0x20,%esi
0xffffffff8111e12b <+11>: je 0xffffffff8111e143 <syscall_exit_work+35>
0xffffffff8111e12d <+13>: mov %gs:0x2ac80,%rax
0xffffffff8111e136 <+22>: cmpb $0x0,0x800(%rax)
0xffffffff8111e13d <+29>: jne 0xffffffff8111e22a <syscall_exit_work+266>
0xffffffff8111e143 <+35>: mov %gs:0x2ac80,%rax
0xffffffff8111e14c <+44>: cmpq $0x0,0x7c8(%rax)

Using the const alias changes the beginning of syscall_exit_work() to:

0xffffffff8111cb80 <+0>: push %r12
0xffffffff8111cb82 <+2>: mov %gs:0x7ef0e0f6(%rip),%r12 # 0x2ac80 <pcpu_hot>
0xffffffff8111cb8a <+10>: push %rbp
0xffffffff8111cb8b <+11>: mov %rdi,%rbp
0xffffffff8111cb8e <+14>: push %rbx
0xffffffff8111cb8f <+15>: mov %rsi,%rbx
0xffffffff8111cb92 <+18>: and $0x20,%esi
0xffffffff8111cb95 <+21>: je 0xffffffff8111cba6 <syscall_exit_work+38>
0xffffffff8111cb97 <+23>: cmpb $0x0,0x800(%r12)
0xffffffff8111cba0 <+32>: jne 0xffffffff8111cc7a <syscall_exit_work+250>
0xffffffff8111cba6 <+38>: cmpq $0x0,0x7c8(%r12)

So we see both that RIP-relative addressing is now used (hence the instruction
is one byte shorter) and that the reload is gone.

Now, I am not a compiler expert, but as for the rationale, googling around I
can see Nick explaining it [1]: if you use “p”, you are still reading memory,
just without telling the compiler about it.
BTW: it is related to a discussion you had [2], in which you encountered an
issue that I had also encountered before [3]. My bad for pushing it in.
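
To make the shape of the trick concrete, here is a reduced model of the old
percpu_stable_op() behind this_cpu_read_stable() (a hypothetical, simplified
helper with a made-up variable, not the kernel's actual macro, and it assumes
a non-PIC x86-64 build). The asm receives only the address through the “p”
constraint; since there is no "m" operand and no "memory" clobber, the
compiler assumes the result depends on nothing but the address, so identical
reads may be merged even across barriers:

#define barrier() asm volatile("" ::: "memory")

extern unsigned long some_percpu_var;

static inline unsigned long gs_read_stable(void)
{
	unsigned long val;

	/*
	 * Only the address is passed in; the compiler is never told that
	 * memory is read, so two of these loads may be CSE'd into one.
	 */
	asm("movq %%gs:%P[var], %[val]"
	    : [val] "=r" (val)
	    : [var] "p" (&some_percpu_var));
	return val;
}

unsigned long read_twice(void)
{
	unsigned long a = gs_read_stable();

	barrier();		/* need not force a reload here */
	return a + gs_read_stable();
}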

Anyhow, I created similar code on godbolt ( https://godbolt.org/z/dPqKKzPs4 )
to show this behavior, i.e. how compiler barriers cause a reload. The behavior
shows up with both GCC and Clang across various versions.
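
Roughly, it boils down to the following (a reduced sketch with made-up names,
assuming an x86-64 compiler that supports __seg_gs; not the exact code behind
the link):

#define barrier() asm volatile("" ::: "memory")

struct task_struct;

/* Plain gs-based per-CPU pointer: must be re-read after every barrier. */
extern struct task_struct * __seg_gs current_task;

/*
 * Const-qualified view of the same value: the compiler may assume it never
 * changes within the compilation unit, so the first load can be reused.
 */
extern struct task_struct * const __seg_gs const_current_task;

struct task_struct *volatile sink;	/* keeps both stores alive */

void twice_plain(void)
{
	sink = current_task;
	barrier();
	sink = current_task;		/* emits a second %gs-relative mov */
}

void twice_const(void)
{
	sink = const_current_task;
	barrier();
	sink = const_current_task;	/* may reuse the first load */
}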

The idea behind the patch is that it communicates, at compilation-unit
granularity, that current is fixed. There is an open question of whether it
works with LTO, which I have never checked.


[1] https://reviews.llvm.org/D145416
[2] https://lore.kernel.org/lkml/20230306120106.GE1267364@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
[3] https://lore.kernel.org/all/20190823224424.15296-5-namit@xxxxxxxxxx/

--

Here’s the updated patch. I didn’t actually boot a machine with it, though, so
new issues might have crept in since my last patch set:

-- >8 --

Date: Thu, 12 Oct 2023 06:02:03 -0700
Subject: [PATCH] Const current

---
arch/x86/include/asm/current.h | 17 ++++++++++++++++-
arch/x86/kernel/cpu/common.c | 4 ++++
include/linux/compiler.h | 2 +-
3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/current.h b/arch/x86/include/asm/current.h
index a1168e7b69e5..d05fbb6a8bd7 100644
--- a/arch/x86/include/asm/current.h
+++ b/arch/x86/include/asm/current.h
@@ -36,9 +36,24 @@ static_assert(sizeof(struct pcpu_hot) == 64);

DECLARE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot);

+/*
+ * Hold a constant alias of pcpu_hot, which allows the compiler to avoid
+ * redundant reads of the current task.
+ *
+ * We must mark const_pcpu_hot with the segment qualifier, as otherwise gcc
+ * would emit redundant reads of const_pcpu_hot.
+ */
+DECLARE_PER_CPU(struct pcpu_hot const __percpu_seg_override, const_pcpu_hot);
+
static __always_inline struct task_struct *get_current(void)
{
- return this_cpu_read_stable(pcpu_hot.current_task);
+
+ /*
+ * GCC is missing the functionality to remove segment qualifiers, which
+ * messes with the per-cpu infrastructure that holds local copies. Use
+ * __raw_cpu_read() to avoid holding any copy.
+ */
+ return __raw_cpu_read(, const_pcpu_hot.current_task);
}

#define current get_current()
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 382d4e6b848d..94590af11388 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -2052,6 +2052,10 @@ DEFINE_PER_CPU_ALIGNED(struct pcpu_hot, pcpu_hot) = {
};
EXPORT_PER_CPU_SYMBOL(pcpu_hot);

+DECLARE_PER_CPU_ALIGNED(struct pcpu_hot const __percpu_seg_override, const_pcpu_hot)
+ __attribute__((alias("pcpu_hot")));
+EXPORT_PER_CPU_SYMBOL(const_pcpu_hot);
+
#ifdef CONFIG_X86_64
DEFINE_PER_CPU_FIRST(struct fixed_percpu_data,
fixed_percpu_data) __aligned(PAGE_SIZE) __visible;
diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index d7779a18b24f..e7059292085e 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -212,7 +212,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
*/
#define ___ADDRESSABLE(sym, __attrs) \
static void * __used __attrs \
- __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)&sym;
+ __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)(uintptr_t)&sym;
#define __ADDRESSABLE(sym) \
___ADDRESSABLE(sym, __section(".discard.addressable"))

--
2.25.1