task_mm_cid_work() currently loops over all possible CPUs twice to
clean up old mm_cid values remotely. This can waste resources,
especially for tasks whose affinity restricts them to a few CPUs.

Reduce the number of CPUs involved in the remote CID cleanup carried
out by task_mm_cid_work() by iterating over the mm_cidmask instead of
all possible CPUs.

Using the mm_cidmask for the remote cleanup can considerably reduce the
function runtime in highly isolated environments, where each process
has affinity to a single core. In the worst case, the mask is
equivalent to all possible CPUs and there is no difference from the
current behaviour.

Signed-off-by: Gabriele Monaco <gmonaco@xxxxxxxxxx>
---
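For reference, the two iterators behave roughly as sketched below. This
is a simplified illustration, not the exact definitions from
include/linux/cpumask.h (which vary across kernel versions), and
do_cleanup() is a placeholder standing in for the
sched_mm_cid_remote_clear_*() helpers:

	int cpu;

	/* Visits every CPU in cpu_possible_mask, regardless of the
	 * task's affinity. */
	for_each_possible_cpu(cpu)
		do_cleanup(cpu);

	/* Visits only the CPUs whose bit is set in *cidmask, so a
	 * process with affinity to a single core triggers a single
	 * iteration instead of one per possible CPU. */
	for_each_cpu(cpu, cidmask)
		do_cleanup(cpu);
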
 kernel/sched/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 95e40895a519..57b50b5952fa 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10553,14 +10553,14 @@ static void task_mm_cid_work(struct callback_head *work)
 		return;
 	cidmask = mm_cidmask(mm);
 	/* Clear cids that were not recently used. */
-	for_each_possible_cpu(cpu)
+	for_each_cpu(cpu, cidmask)
 		sched_mm_cid_remote_clear_old(mm, cpu);
 	weight = cpumask_weight(cidmask);
 	/*
 	 * Clear cids that are greater or equal to the cidmask weight to
 	 * recompact it.
 	 */
-	for_each_possible_cpu(cpu)
+	for_each_cpu(cpu, cidmask)
 		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
 }