[PATCH glibc 2.31 2/5] glibc: sched_getcpu(): use rseq cpu_id TLS on Linux (v5)
From: Mathieu Desnoyers
Date: Wed Aug 07 2019 - 10:27:43 EST
When available, use the cpu_id field from __rseq_abi on Linux to
implement sched_getcpu(). Fall back on the vgetcpu vDSO if rseq is
unavailable.
Benchmarks:
x86-64: Intel E5-2630 v3@xxxxxxx, 16-core, hyperthreading
glibc sched_getcpu(): 13.7 ns (baseline)
glibc sched_getcpu() using rseq: 2.5 ns (speedup: 5.5x)
inline load cpuid from __rseq_abi TLS: 0.8 ns (speedup: 17.1x)
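For reference, the "inline load" figure above corresponds to reading the
cpu_id field straight out of the rseq TLS area, with no function call and
no fallback branch. A minimal sketch of that variant (hypothetical user
code, not part of this patch; it assumes the <sys/rseq.h> header from this
series exposing the __rseq_abi TLS symbol, and spells the relaxed load
with the __atomic builtin rather than glibc's internal macro):

#include <sys/rseq.h>   /* declares the __rseq_abi TLS variable (this series) */

static inline int
rseq_cpu_id_raw (void)
{
  /* Relaxed load of the field the kernel keeps up to date; a negative
     value means rseq is not registered for this thread.  */
  return (int) __atomic_load_n (&__rseq_abi.cpu_id, __ATOMIC_RELAXED);
}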
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
CC: Carlos O'Donell <carlos@xxxxxxxxxx>
CC: Florian Weimer <fweimer@xxxxxxxxxx>
CC: Joseph Myers <joseph@xxxxxxxxxxxxxxxx>
CC: Szabolcs Nagy <szabolcs.nagy@xxxxxxx>
CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CC: Ben Maurer <bmaurer@xxxxxx>
CC: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CC: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
CC: Boqun Feng <boqun.feng@xxxxxxxxx>
CC: Will Deacon <will.deacon@xxxxxxx>
CC: Dave Watson <davejwatson@xxxxxx>
CC: Paul Turner <pjt@xxxxxxxxxx>
CC: libc-alpha@xxxxxxxxxxxxxx
CC: linux-kernel@xxxxxxxxxxxxxxx
CC: linux-api@xxxxxxxxxxxxxxx
---
Changes since v1:
- rseq is only used if both __NR_rseq and RSEQ_SIG are defined.
Changes since v2:
- remove duplicated __rseq_abi extern declaration.
Changes since v3:
- update ChangeLog.
Changes since v4:
- Use atomic_load_relaxed to load the __rseq_abi.cpu_id field, since
  __rseq_abi is no longer declared volatile.
- Include atomic.h, which provides atomic_load_relaxed.
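For completeness, a tiny caller exercising the new fast path (hypothetical
test program, not part of the patch): with rseq registered by the kernel,
sched_getcpu() now reduces to a TLS load plus a branch.

#include <sched.h>
#include <stdio.h>

int
main (void)
{
  /* On kernels with rseq support, this avoids the getcpu
     vDSO/syscall entirely.  */
  int cpu = sched_getcpu ();
  if (cpu < 0)
    {
      perror ("sched_getcpu");
      return 1;
    }
  printf ("running on CPU %d\n", cpu);
  return 0;
}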
---
ChangeLog | 5 +++++
sysdeps/unix/sysv/linux/sched_getcpu.c | 25 +++++++++++++++++++++++--
2 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/ChangeLog b/ChangeLog
index 9a1930248f..128a7b4baa 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,8 @@
+2019-08-06 Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
+
+ * sysdeps/unix/sysv/linux/sched_getcpu.c: use rseq cpu_id TLS on
+ Linux.
+
2019-08-06 Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
* NEWS: Add Restartable Sequences feature description.
diff --git a/sysdeps/unix/sysv/linux/sched_getcpu.c b/sysdeps/unix/sysv/linux/sched_getcpu.c
index fb0d317f83..01a818f5b0 100644
--- a/sysdeps/unix/sysv/linux/sched_getcpu.c
+++ b/sysdeps/unix/sysv/linux/sched_getcpu.c
@@ -18,14 +18,15 @@
#include <errno.h>
#include <sched.h>
#include <sysdep.h>
+#include <atomic.h>
#ifdef HAVE_GETCPU_VSYSCALL
# define HAVE_VSYSCALL
#endif
#include <sysdep-vdso.h>
-int
-sched_getcpu (void)
+static int
+vsyscall_sched_getcpu (void)
{
#ifdef __NR_getcpu
unsigned int cpu;
@@ -37,3 +38,23 @@ sched_getcpu (void)
return -1;
#endif
}
+
+#ifdef __NR_rseq
+#include <sys/rseq.h>
+#endif
+
+#if defined __NR_rseq && defined RSEQ_SIG
+int
+sched_getcpu (void)
+{
+  int cpu_id = atomic_load_relaxed (&__rseq_abi.cpu_id);
+
+  return cpu_id >= 0 ? cpu_id : vsyscall_sched_getcpu ();
+}
+#else
+int
+sched_getcpu (void)
+{
+  return vsyscall_sched_getcpu ();
+}
+#endif
--
2.17.1