On Tue, Jan 30, 2018 at 11:47 AM, Rohit Jain <rohit.k.jain@xxxxxxxxxx> wrote:
[...]
>>> @@ -6102,7 +6107,8 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>>>   */
>>>  static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>>>  {
>>> -	int cpu;
>>> +	int cpu, rcpu = -1;
>>> +	unsigned long max_cap = 0;
>>>
>>>  	if (!static_branch_likely(&sched_smt_present))
>>>  		return -1;
>>>
>>> @@ -6110,11 +6116,13 @@ static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int t
>>>  	for_each_cpu(cpu, cpu_smt_mask(target)) {
>>>  		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>>>  			continue;
>>> -		if (idle_cpu(cpu))
>>> -			return cpu;
>>> +		if (idle_cpu(cpu) && (capacity_of(cpu) > max_cap)) {
>>> +			max_cap = capacity_of(cpu);
>>> +			rcpu = cpu;
>>
>> At the SMT level, do you need to bother with choosing best capacity
>> among threads? If RT is eating into one of the SMT threads' underlying
>> capacity, it would eat into the other's. Wondering what's the benefit
>> of doing this here.
>
> Yes, you are right, because of SD_SHARE_CPUCAPACITY. However, the
> benefit is that if we don't do this check, we might end up picking an
> SMT thread with "high" RT/IRQ activity, and the task may then sit on
> its run queue for a while, till the pull side can bail us out.
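
Just to spell out that capacity argument for myself: below is a toy
userspace model of RT/IRQ time discounting per-thread capacity. This is
not the kernel's scale_rt_capacity()/capacity_of() code; the toy_*
names, the rt_pressure field, and all numbers are made up for
illustration.

/*
 * Toy userspace model, not kernel code: RT/IRQ pressure discounting
 * a CPU's capacity. rt_pressure and all numbers are fabricated; the
 * kernel derives the real value from the rq's RT accounting.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

struct toy_cpu {
	unsigned long capacity_orig;	/* max capacity of this thread */
	unsigned long rt_pressure;	/* avg RT/IRQ time, same scale */
};

/* Capacity left for CFS tasks once RT/IRQ time is removed. */
static unsigned long toy_capacity_of(const struct toy_cpu *c)
{
	if (c->rt_pressure >= c->capacity_orig)
		return 1;	/* never report zero capacity */
	return c->capacity_orig - c->rt_pressure;
}

int main(void)
{
	struct toy_cpu smt[2] = {
		{ SCHED_CAPACITY_SCALE, 100 },	/* light RT activity */
		{ SCHED_CAPACITY_SCALE, 700 },	/* heavy RT activity */
	};

	for (int i = 0; i < 2; i++)
		printf("cpu%d: capacity %lu\n", i, toy_capacity_of(&smt[i]));
	return 0;
}

If capacity_of() can diverge between siblings like this, then the
max_cap comparison in the loop above is what steers the wakeup away
from the RT-loaded thread.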
Do your tests show a difference in results though with such a change
(for select_idle_smt)?
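
For concreteness, here is a small userspace sketch of the selection
behavior I understand the hunk above to give. The idle/capacity state
is fabricated and this is not the actual kernel code; the hunk is also
cut off before the end of the loop, so I am assuming rcpu is returned
after it.

/*
 * Toy model of the proposed select_idle_smt() scan: instead of taking
 * the first idle sibling, keep scanning and remember the idle sibling
 * with the highest remaining capacity. All data here is fabricated.
 */
#include <stdio.h>
#include <stdbool.h>

struct toy_cpu {
	int id;
	bool allowed;		/* stand-in for the cpus_allowed test */
	bool idle;		/* stand-in for idle_cpu() */
	unsigned long capacity;	/* stand-in for capacity_of() */
};

static int toy_select_idle_smt(const struct toy_cpu *mask, int nr)
{
	unsigned long max_cap = 0;
	int rcpu = -1;

	for (int i = 0; i < nr; i++) {
		if (!mask[i].allowed || !mask[i].idle)
			continue;
		if (mask[i].capacity > max_cap) {
			max_cap = mask[i].capacity;
			rcpu = mask[i].id;
		}
	}
	return rcpu;		/* -1 if no allowed idle sibling */
}

int main(void)
{
	const struct toy_cpu siblings[] = {
		{ 0, true, true,  924 },	/* idle, some RT pressure */
		{ 1, true, true, 1012 },	/* idle, barely pressured */
		{ 2, true, false, 1024 },	/* busy */
	};

	/* Picks cpu1: idle and highest remaining capacity. */
	printf("picked cpu%d\n", toy_select_idle_smt(siblings, 3));
	return 0;
}

The -1 result when no allowed idle sibling exists matches the existing
select_idle_smt() behavior.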
thanks,
- Joel