Re: [PATCH v2] sched/cache: Reduce the overhead of task_cache_work by only scanning the visited cpus.

From: Luo Gengkun

Date: Sat Apr 18 2026 - 05:02:14 EST




On 2026/4/15 11:10, Chen, Yu C wrote:
Hi Gengkun,

On 4/14/2026 11:07 PM, Luo Gengkun wrote:
The overhead of task_cache_work is high, especially on multi-NUMA systems.
Currently, task_cache_work tries to find the pref_llc by scanning all cpus
in the system. However, most of these scans are wasted effort, such as those
for cpus that have never been visited or were last visited a long time ago.

To address this problem, this patch introduces visited_cpus to track the
visited cpus and uses llc_epoch_visited_timeout to evict cpus whose last
visit has timed out.

Signed-off-by: Luo Gengkun <luogengkun2@xxxxxxxxxx>
---
Thanks for the reviews. I've updated the patch based on your feedback.

v2 Changes:
1. Added a pre-check before setting/clearing visited_cpus to avoid C2C
   (cache-to-cache transfer) overhead.
2. Gated llc_epoch_visited_timeout behind a static key to minimize the
   overhead when the timeout is unused.

Both changes are sketched just below.
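For reference, the marking side of change 1 looks roughly like this (a
sketch only; mm_mark_cpu_visited() is an illustrative name, not the exact
helper in the patch):

/* Sketch: record that this mm has recently run on @cpu. */
static inline void mm_mark_cpu_visited(struct mm_struct *mm, int cpu)
{
        /* Pre-check keeps the common case read-only, avoiding C2C traffic. */
        if (!cpumask_test_cpu(cpu, &mm->sc_stat.visited_cpus))
                cpumask_set_cpu(cpu, &mm->sc_stat.visited_cpus);
}

Change 2 follows the usual static-key pattern. Of the names below, only
sched_cache_timeout_enabled() appears in the diff further down; the rest
are assumptions:

static u64 llc_epoch_visited_timeout;
DEFINE_STATIC_KEY_FALSE(sched_cache_timeout_key);

static inline bool sched_cache_timeout_enabled(void)
{
        return static_branch_unlikely(&sched_cache_timeout_key);
}

/* Called from the debugfs write handler; flips the key with the timeout. */
static void sched_cache_set_visited_timeout(u64 timeout)
{
        WRITE_ONCE(llc_epoch_visited_timeout, timeout);
        if (timeout)
                static_branch_enable(&sched_cache_timeout_key);
        else
                static_branch_disable(&sched_cache_timeout_key);
}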

Since the visited-cpus optimization should help reduce the scan cost,
I wonder if we should enable it by default, regardless of the timeout
value set by the user. That would mainly help avoid introducing the
extra debugfs control and the static key.

I would be happy to do this.


  #ifdef CONFIG_PREEMPT_DYNAMIC
@@ -669,6 +717,8 @@ static __init int sched_init_debug(void)
      llc = debugfs_create_dir("llc_balancing", debugfs_sched);
      debugfs_create_file("enabled", 0644, llc, NULL,
                  &sched_cache_enable_fops);
+    debugfs_create_file("epoch_visited_timeout", 0644, llc, NULL,
+                &sched_cache_timeout_enable_fops);
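(For context, sched_cache_timeout_enable_fops could be built with the
stock DEFINE_SIMPLE_ATTRIBUTE() helper, roughly as below; the get/set
handler names are assumptions, reusing the setter sketched earlier:)

static int sched_cache_timeout_get(void *data, u64 *val)
{
        *val = llc_epoch_visited_timeout;
        return 0;
}

static int sched_cache_timeout_set(void *data, u64 val)
{
        sched_cache_set_visited_timeout(val);
        return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(sched_cache_timeout_enable_fops,
                        sched_cache_timeout_get,
                        sched_cache_timeout_set, "%llu\n");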

Is it possible to reuse llc_epoch_affinity_timeout without introducing
epoch_visited_timeout? The idea is that if a task has not run on a CPU
for 10 ms (by default), its footprint there is cleared.

I think this is also acceptable, since visited_timeout was inspired by
affinity_timeout in the first place.
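If we reuse llc_epoch_affinity_timeout, the staleness check could look
roughly like this (a sketch; mm_sched_cpu_stale() is an illustrative
name, and the epoch counters are assumed to be u64):

/*
 * Sketch: true if this mm has not run on rq's CPU within the (reused)
 * affinity timeout; a caller would clear the CPU from visited_cpus and
 * skip it in the scan.
 */
static inline bool mm_sched_cpu_stale(struct rq *rq,
                                      struct sched_cache_time *pcpu_sched)
{
        return rq->cpu_epoch - pcpu_sched->epoch > llc_epoch_affinity_timeout;
}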

[ ... ]

@@ -1736,8 +1746,17 @@ static void task_cache_work(struct callback_head *work)
                  continue;
              for_each_cpu(i, sched_domain_span(sd)) {
-                occ = fraction_mm_sched(cpu_rq(i),
-                            per_cpu_ptr(mm->sc_stat.pcpu_sched, i));
+                struct rq *rq = cpu_rq(i);
+                struct sched_cache_time *pcpu_sched = per_cpu_ptr(mm->sc_stat.pcpu_sched, i);
+                /* Skip the rq that has not been hit for a long time */
+                if (sched_cache_timeout_enabled() &&
+                    cpumask_test_cpu(cpu_of(rq), &mm->sc_stat.visited_cpus) &&

cpumask_test_cpu(i) should be fine here (i == cpu_of(rq), and it avoids
the rq dereference). Also, the rq access above doesn't hold
cpu_epoch_lock; I wonder if we can safely calculate
rq->cpu_epoch - pcpu_sched->epoch inside fraction_mm_sched while
holding the lock?

Do we really need to access rq->cpu_epoch under the lock in read-only
paths? I noticed that task_tick_cache accesses it directly. Besides,
moving this access outside the lock would help reduce lock contention.
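i.e., something like the following, done before calling
fraction_mm_sched() (a sketch; assuming cpu_epoch and epoch are u64, so
plain READ_ONCE() reads are tear-free on 64-bit):

/* Sketch: staleness check without taking cpu_epoch_lock. */
static inline bool mm_sched_epoch_expired(struct rq *rq,
                                          struct sched_cache_time *pcpu_sched)
{
        u64 delta = READ_ONCE(rq->cpu_epoch) - READ_ONCE(pcpu_sched->epoch);

        return delta > llc_epoch_visited_timeout;
}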

thanks,
Luo Gengkun

I'll test your patch after fixing the bug reported by sashiko.dev.

thanks,
Chenyu