On Wed, Jul 12, 2023 at 06:47:26PM +0800, Abel Wu wrote:
+ *
+ * HOW
+ * ===
+ *
+ * A shared_runq comprises a list, and a spinlock for synchronization.
+ * Given that the critical section for a shared_runq is typically a fast list
+ * operation, and that the shared_runq is localized to a single LLC, the
+ * spinlock will typically only be contended on workloads that do little
+ * other than hammer the runqueue.
Would there be scalability issues on large LLCs?
See the next patch in the series [0] where we shard the per-LLC shared
runqueues to avoid contention.
[0]: https://lore.kernel.org/lkml/20230710200342.358255-7-void@xxxxxxxxxxxxx/
+
+ task_rq_unlock(src_rq, p, &src_rf);
+
+ raw_spin_rq_lock(rq);
+ rq_repin_lock(rq, rf);
By making it look a bit uglier, we can save some cycles when src_rq == rq:
	if (src_rq != rq) {
		task_rq_unlock(src_rq, p, &src_rf);
		raw_spin_rq_lock(rq);
	} else {
		rq_unpin_lock(src_rq, &src_rf);
		raw_spin_unlock_irqrestore(&p->pi_lock, src_rf.flags);
	}
	rq_repin_lock(rq, rf);