[PATCH] sched/task_group: Re-layout structure to reduce false sharing
From: Deng Pan
Date: Thu May 25 2023 - 22:49:09 EST
When running the UnixBench Pipe-based Context Switching case, we observed
heavy false sharing on accesses to 'load_avg' against rt_se and rt_rq.
Pipe-based Context Switching is a typical suspend/wakeup scenario, in
which load_avg is frequently loaded and stored, while rt_se and rt_rq
are frequently loaded. Unfortunately, they sit in the same cacheline.
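For illustration, here is a minimal userspace sketch of that access
pattern (not kernel code; field names borrowed from the description
above, and a 64-byte cacheline is assumed). One thread keeps storing to
a counter the way load_avg is updated, while another keeps loading a
neighboring pointer the way rt_se/rt_rq are read; with both fields on
one cacheline, every store invalidates the reader's copy, and the
padding moves the reader's field onto its own line:

	/* Hedged sketch of the false-sharing pattern, not kernel code. */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	struct shared {
		/* Written constantly, like load_avg in the wakeup path. */
		atomic_long load_avg;
		/*
		 * Without this padding, rt_rq shares a 64-byte cacheline
		 * with load_avg, so every store above invalidates the
		 * reader's cached copy. Remove it to see the slowdown.
		 */
		char pad[64 - sizeof(atomic_long)];
		/* Read constantly, like rt_se/rt_rq. */
		void *rt_rq;
	};

	/* Align the struct so load_avg starts a cacheline. */
	static _Alignas(64) struct shared s;

	static void *writer(void *arg)
	{
		(void)arg;
		for (long i = 0; i < 100000000; i++)
			atomic_fetch_add_explicit(&s.load_avg, 1,
						  memory_order_relaxed);
		return NULL;
	}

	static void *reader(void *arg)
	{
		void *sink = NULL;

		(void)arg;
		for (long i = 0; i < 100000000; i++)
			sink = *(void * volatile *)&s.rt_rq;
		return sink;
	}

	int main(void)
	{
		pthread_t w, r;

		pthread_create(&w, NULL, writer, NULL);
		pthread_create(&r, NULL, reader, NULL);
		pthread_join(w, NULL);
		pthread_join(r, NULL);
		printf("load_avg = %ld\n", atomic_load(&s.load_avg));
		return 0;
	}

Build with `gcc -O2 -pthread`; with the padding removed, the writer's
stores and the reader's loads ping-pong one cacheline between cores,
which is the same pattern perf flagged in task_group.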
This change re-lays out the structure:
1. Move rt_se and rt_rq to a 2nd cacheline.
2. Keep the 'parent' field in the 2nd cacheline, since it is also
   accessed very often when cgroups are nested; thanks to Tim Chen for
   providing the insight. (A simplified sketch of the resulting layout
   follows this list.)
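For reference, a simplified view of the resulting field order, taken
from the diff below (CONFIG_* guards and unrelated fields elided; the
exact cacheline boundaries depend on the configuration and on the
fields ahead of this block):

	struct task_group {
		/* ... CFS fields, including the hot load_avg,
		 * occupy the earlier cachelines ... */

		/* Bookkeeping moved up, pushing rt_se/rt_rq
		 * away from load_avg: */
		struct rcu_head		rcu;
		struct list_head	list;

		struct list_head	siblings;
		struct list_head	children;
		/* kept here: hot when cgroups are nested */
		struct task_group	*parent;

		/* Now on a later cacheline than load_avg: */
		struct sched_rt_entity	**rt_se;
		struct rt_rq		**rt_rq;
		/* ... */
	};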
With this change, on an Intel Icelake 2-socket 80C/160T platform, based
on v6.0-rc6, the 160-copy parallel score improves by ~5%, and perf shows
that cycles spent accessing rt_se and rt_rq drop from ~6.0% to ~0.1%.
Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
Signed-off-by: Deng Pan <pan.deng@xxxxxxxxx>
---
kernel/sched/sched.h | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ec7b3e0a2b20..a1dd289511b2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -389,6 +389,13 @@ struct task_group {
#endif
#endif
+ struct rcu_head rcu;
+ struct list_head list;
+
+ struct list_head siblings;
+ struct list_head children;
+ struct task_group *parent;
+
#ifdef CONFIG_RT_GROUP_SCHED
struct sched_rt_entity **rt_se;
struct rt_rq **rt_rq;
@@ -396,13 +403,6 @@ struct task_group {
struct rt_bandwidth rt_bandwidth;
#endif
- struct rcu_head rcu;
- struct list_head list;
-
- struct task_group *parent;
- struct list_head siblings;
- struct list_head children;
-
#ifdef CONFIG_SCHED_AUTOGROUP
struct autogroup *autogroup;
#endif
--
2.39.1