Using the SDM1.1 057.SDET benchmark, I obtained the following results.
Reference 2.2.11:
scripts throughput
32 6133
64 5599
96 4978
128 4387
150 4044
Patched 2.2.11:
scripts throughput
32 6106
64 5515
96 4928
128 4387
150 4022
* These results were averaged over 20 runs, each performed under
identical system conditions.
Using make, I obtained the following timings:
Reference 2.2.11:
Real: 77.82
User: 198.26
Sys: 20.52
Patched 2.2.11:
Real: 77.94
User: 199.44
Sys: 18.70
As you can see, I found a slight performance decrease with the patch:
SDET throughput drops by up to about 1.5% (at 64 scripts).
I agree with your conceptual view of the situation, so this is a little
bit of a surprise. Are these the same benchmark results you came up
with, or did I mismeasure somehow?
Also, it looks to me like you removed some code in reschedule_idle
necessary for good real-time performance, in particular, evaluating the
preemption_goodness of a task vs. the tasks running on all other CPUs.
From where I sit, it looks like your implementation evaluates the
preemption goodness on the best_cpu only after checking avg_timeslice
vs. cache flush time. The potential impact of this on real-time
performance might be pretty drastic -- what do you think?
Maybe it would be worthwhile to evaluate avg_timeslice vs. cacheflush
time in goodness() itself?
-JCN
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/