BFS 420: cleanup try_preempt

From: Hillf Danton
Date: Thu May 17 2012 - 08:47:50 EST


First, the on-stack cpumask is removed.

Then, before scanning the online CPUs, highest_prio and latest_deadline are
initialized with the given task's values, to shorten the scan as much as possible.

And if we find a highest_prio_rq, its current task is rescheduled without
calling can_preempt, since those checks were already done while finding the
target runqueue.


--- a/kernel/sched/bfs.c Mon May 14 20:50:38 2012
+++ b/kernel/sched/bfs.c Thu May 17 20:35:46 2012
@@ -1423,7 +1423,6 @@ static void try_preempt(struct task_stru
struct rq *highest_prio_rq = NULL;
int cpu, highest_prio;
u64 latest_deadline;
- cpumask_t tmp;

/*
* We clear the sticky flag here because for a task to have called
@@ -1441,14 +1440,10 @@ static void try_preempt(struct task_stru
if (p->policy == SCHED_IDLEPRIO)
return;

- if (likely(online_cpus(p)))
- cpus_and(tmp, cpu_online_map, p->cpus_allowed);
- else
- return;
-
- highest_prio = latest_deadline = 0;
+ highest_prio = p->prio;
+ latest_deadline = p->deadline;

- for_each_cpu_mask(cpu, tmp) {
+ for_each_cpu_and(cpu, cpu_online_map, p->cpus_allowed) {
struct rq *rq;
int rq_prio;

@@ -1465,10 +1460,8 @@ static void try_preempt(struct task_stru
}
}

- if (likely(highest_prio_rq)) {
- if (can_preempt(p, highest_prio, highest_prio_rq->rq_deadline))
- resched_task(highest_prio_rq->curr);
- }
+ if (highest_prio_rq)
+ resched_task(highest_prio_rq->curr);
}
#else /* CONFIG_SMP */
static inline bool needs_other_cpu(struct task_struct *p, int cpu)
--