On Tue, 2023-09-05 at 13:11 -0400, Mathieu Desnoyers wrote:
Rate limit migrations to 1 migration per 2 milliseconds per task. On a
kernel with the EEVDF scheduler (commit b97d64c722598ffed42ece814a2cb791336c6679),
this speeds up hackbench from 62s to 45s on a 192-core AMD EPYC (2 sockets).
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 479db611f46e..0d294fce261d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4510,6 +4510,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
p->se.vruntime = 0;
p->se.vlag = 0;
p->se.slice = sysctl_sched_base_slice;
+ p->se.next_migration_time = 0;
It seems like next_migration_time should be initialized to the current time;
otherwise, on a system that has been running for a long time, clock wraparound
could cause problems.
INIT_LIST_HEAD(&p->se.group_node);
#ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d92da2d78774..24ac69913005 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -960,6 +960,14 @@ int sched_update_scaling(void)
static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se);
+static bool should_migrate_task(struct task_struct *p, int prev_cpu)
+{
+ /* Rate limit task migration. */
+ if (sched_clock_cpu(prev_cpu) < p->se.next_migration_time)
Should we use time_before64(sched_clock_cpu(prev_cpu), p->se.next_migration_time)
here, since sched_clock_cpu() returns a u64 and the comparison should be safe
across wraparound?
+ return false;
+ return true;
+}
+
Thanks.
Tim