On 12/12/2019 15:24, Peter Zijlstra wrote:
On Thu, Dec 12, 2019 at 10:41:02PM +0800, Cheng Jian wrote:
> Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")

That looks sane enough. I'd only argue the changelog should directly point
out that the issue is that we consume some of the 'nr' attempts on CPUs
that are not allowed for the task, and thus waste them.

The 'funny' thing is that select_idle_core() actually does the right
thing.
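For reference, this is roughly what that looks like (abridged from the
v5.4-era kernel/sched/fair.c and quoted from memory, so treat it as a
sketch rather than a verbatim excerpt; the SMT-presence and
has-idle-cores early exits are omitted):

static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
{
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
	int core, cpu;

	/* Narrow the scan to the task's allowed CPUs once, up front... */
	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	/* ...so the wrapped walk only ever visits permitted cores. */
	for_each_cpu_wrap(core, cpus, target) {
		bool idle = true;

		for_each_cpu(cpu, cpu_smt_mask(core)) {
			__cpumask_clear_cpu(cpu, cpus);
			if (!available_idle_cpu(cpu))
				idle = false;
		}

		if (idle)
			return core;
	}

	return -1;
}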
Copying that should work:
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 08a233e97a01..416d574dcebf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5828,6 +5837,7 @@ static inline int select_idle_smt(struct task_struct *p, int target)
  */
 static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
 {
+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
 	struct sched_domain *this_sd;
 	u64 avg_cost, avg_idle;
 	u64 time, cost;
@@ -5859,11 +5869,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
 
 	time = cpu_clock(this);
 
-	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
+
+	for_each_cpu_wrap(cpu, cpus, target) {
 		if (!--nr)
 			return si_cpu;
-		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-			continue;
 		if (available_idle_cpu(cpu))
 			break;
 		if (si_cpu == -1 && sched_idle_cpu(cpu))
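The reason the pre-masking helps the 'nr' budget can be shown with a
hypothetical stand-alone model (plain userspace C, nothing here is
kernel API; 'allowed' stands in for p->cpus_ptr):

#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS	8

static bool allowed[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };
static bool idle[NR_CPUS]    = { 0, 0, 0, 0, 0, 1, 1, 1 };

/* Old order: every visited CPU eats budget, allowed or not. */
static int scan_old(int nr)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!--nr)
			return -1;	/* budget exhausted */
		if (!allowed[cpu])
			continue;
		if (idle[cpu])
			return cpu;
	}
	return -1;
}

/* New order: mask first, so only allowed CPUs eat budget. */
static int scan_new(int nr)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (!allowed[cpu])	/* models the up-front cpumask_and() */
			continue;
		if (!--nr)
			return -1;	/* budget exhausted */
		if (idle[cpu])
			return cpu;
	}
	return -1;
}

int main(void)
{
	printf("old: %d\n", scan_old(4));	/* -1: budget burnt on CPUs 0-3 */
	printf("new: %d\n", scan_new(4));	/* 5: masked scan reaches an idle CPU */
	return 0;
}

With a budget of four attempts on an eight-CPU box where only the upper
half is allowed, the old order spends everything on disallowed CPUs and
fails, while the masked order finds idle CPU 5.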