[PATCH] sched: restore the sched_yield vruntime increase

From: Alex Shi
Date: Tue Apr 05 2011 - 21:05:01 EST


Commit ac53db596cc08ecb8040c removed the vruntime increase for a task
calling sched_yield(), so the yielding task gets scheduled again soon,
which is usually not what the caller wants. It also causes a 50~80
percent performance drop in the volano benchmark on Core2/NHM/WSM
machines. This patch restores the vruntime bump for the yielding task.

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
---
kernel/sched_fair.c | 18 +++++++++++++++++-
1 files changed, 17 insertions(+), 1 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 3f7ec9e..04d58bb 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1956,7 +1956,7 @@ static void yield_task_fair(struct rq *rq)
{
struct task_struct *curr = rq->curr;
struct cfs_rq *cfs_rq = task_cfs_rq(curr);
- struct sched_entity *se = &curr->se;
+ struct sched_entity *se = &curr->se, *rightmost;

/*
* Are we the only task in the tree?
@@ -1975,6 +1975,22 @@ static void yield_task_fair(struct rq *rq)
}

set_skip_buddy(se);
+ /*
+ * Find the rightmost entry in the rbtree:
+ */
+ rightmost = __pick_last_entity(cfs_rq);
+ /*
+ * Already in the rightmost position?
+ */
+ if (unlikely(!rightmost || entity_before(rightmost, se)))
+ return;
+
+ /*
+ * Minimally necessary key value to be last in the tree:
+ * Upon rescheduling, sched_class::put_prev_task() will place
+ * 'current' within the tree based on its new key value.
+ */
+ se->vruntime = rightmost->vruntime + 1;
}

static bool yield_to_task_fair(struct rq *rq, struct task_struct *p, bool preempt)
--
1.6.0
