[PATCH] smt-nice-opt 2.6.4-rc1-mm2

From: Con Kolivas
Date: Fri Mar 05 2004 - 22:56:47 EST


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

This patch optimises the smt-nice branch points

The first hunk removes one unnecessary if() and reorders the conditions from
least to most likely, so the && chain usually short-circuits on the first test.

The second hunk substantially improves the "reschedule sibling task" logic by
rescheduling the sibling only if it, too, would meet the criteria for being put
to sleep. This causes far fewer context switches of low priority tasks.


Consequently, the benchmark results improve substantially:

up  = uniprocessor
mm1 = before the smt-nice patch
sn  = with the smt-nice patch
opt = with this optimisation patch

Times are in seconds.

Concurrent kernel compiles, one make, the other nice +19 make:

	Nice0	Nice19
up	183	235
mm1	208	211
sn	180	237
opt	178	222

As can be seen, the original patch simply dropped performance to uniprocessor
levels whenever there was a nice difference between the siblings. With this
patch the overall throughput is improved over up, which is exactly what smt
processing is meant to deliver.

Con
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.3 (GNU/Linux)

iD8DBQFASUreZUg7+tp6mRURArj2AJ43WSCKOoKTjgXThNmizm1u2+bbGgCfXSWQ
25D55s+73IVd1/ap0QmayZk=
=5rc/
-----END PGP SIGNATURE-----
--- linux-2.6.4-rc1-mm2/kernel/sched.c 2004-03-06 02:17:25.000000000 +1100
+++ linux-2.6.4-rc1-mm2-so/kernel/sched.c 2004-03-06 14:28:10.094227754 +1100
@@ -2002,11 +2002,9 @@ static inline int dependent_sleeper(runq
* task from using an unfair proportion of the
* physical cpu's resources. -ck
*/
- if (p->mm && smt_curr->mm && !rt_task(p) &&
- ((p->static_prio > smt_curr->static_prio &&
- (smt_curr->time_slice * (100 - sd->per_cpu_gain) /
- 100) > task_timeslice(p)) ||
- rt_task(smt_curr)))
+ if (((smt_curr->time_slice * (100 - sd->per_cpu_gain) / 100) >
+ task_timeslice(p) || rt_task(smt_curr)) &&
+ p->mm && smt_curr->mm && !rt_task(p))
ret |= 1;

/*
@@ -2014,9 +2012,9 @@ static inline int dependent_sleeper(runq
* or wake it up if it has been put to sleep for priority
* reasons.
*/
- if ((smt_curr != smt_rq->idle &&
- smt_curr->static_prio > p->static_prio) ||
- (rt_task(p) && !rt_task(smt_curr)) ||
+ if ((((p->time_slice * (100 - sd->per_cpu_gain) / 100) >
+ task_timeslice(smt_curr) || rt_task(p)) &&
+ smt_curr->mm && p->mm && !rt_task(smt_curr)) ||
(smt_curr == smt_rq->idle && smt_rq->nr_running))
resched_task(smt_curr);
}