Re: Default cache_hot_time value back to 10ms

From: Nick Piggin
Date: Wed Oct 06 2004 - 21:28:02 EST


Chen, Kenneth W wrote:
Andrew Morton wrote on Wednesday, October 06, 2004 1:43 PM

"Chen, Kenneth W" <kenneth.w.chen@xxxxxxxxx> wrote:

Secondly, let me ask the question again from the first mail thread: this value
*WAS* 10 ms for a long time, before the domain scheduler. What's so special
about the domain scheduler that all of a sudden this parameter got changed to 2.5?

So why on earth was it switched from 10 to 2.5 in the first place?

Please resend the final patch.
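
[Editorial note for context: cache_hot_time is the cutoff the domain scheduler's
load balancer uses to decide whether a task is still cache hot and therefore
should not be pulled to another CPU; a smaller value makes migration more
aggressive. The small program below is an illustrative model only, paraphrased
rather than verbatim kernel source; the function name and the 5 ms example are
made up for demonstration.]

#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative model (not verbatim kernel source): a task counts as
 * "cache hot" if it last ran within cache_hot_time nanoseconds, and
 * the load balancer tries not to migrate cache-hot tasks.
 */
static int task_is_cache_hot(uint64_t now_ns, uint64_t last_ran_ns,
                             uint64_t cache_hot_time_ns)
{
        return (now_ns - last_ran_ns) < cache_hot_time_ns;
}

int main(void)
{
        uint64_t last_ran = 0;
        uint64_t now = 5 * 1000000ULL;  /* task last ran 5 ms ago */

        /* With a 2.5 ms cutoff the task already looks cold and may be
         * migrated; with the old 10 ms value it still looks hot. */
        printf("2.5 ms cutoff: hot=%d\n",
               task_is_cache_hot(now, last_ran, 2500000ULL));
        printf("10 ms cutoff:  hot=%d\n",
               task_is_cache_hot(now, last_ran, 10000000ULL));
        return 0;
}

[This is the behavioural difference behind the reported regression: the 2.5 ms
default lets tasks be migrated away from their cache-warm CPU sooner.]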



Here is a patch that reverts the default cache_hot_time value back to the
equivalent of the pre-domain scheduler, which determined a task's cache affinity
via the architecture-defined variable cache_decay_ticks.
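
[Editorial note: as a rough sketch of the arithmetic behind that equivalence --
the helper name and the HZ value here are assumptions for illustration, not
taken from the actual patch -- the pre-domain scheduler expressed cache affinity
in timer ticks via cache_decay_ticks, so the matching nanosecond cache_hot_time
is that tick count scaled by the tick length.]

#include <stdint.h>
#include <stdio.h>

#define HZ 1000                         /* assumed timer frequency */
#define NSEC_PER_SEC 1000000000ULL

/* Convert an architecture's cache_decay_ticks (timer ticks) into an
 * equivalent cache_hot_time in nanoseconds. Hypothetical helper for
 * illustration, not code from the patch itself. */
static uint64_t hot_time_from_decay_ticks(unsigned long decay_ticks)
{
        return (uint64_t)decay_ticks * (NSEC_PER_SEC / HZ);
}

int main(void)
{
        /* An architecture reporting 10 decay ticks at HZ=1000 maps back
         * to the old 10 ms cutoff (10000000 ns). */
        printf("%llu ns\n",
               (unsigned long long)hot_time_from_decay_ticks(10));
        return 0;
}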

This is merely a request that we go back to what *was* there before, *NOT* a new
scheduler tweak (whatever tweak was done for the domain scheduler broke
traditional/industry-recognized workloads).


OK... Well Andrew, as I said, I'd be happy for this to go in. I'd be *extra*
happy if Judith ran a few of those dbt thingy tests which had been sensitive
to idle time. Can you ask her about that, or should I?

As a side note, I'd like to get involved in future scheduler tuning experiments;
we have a fair number of benchmark environments where we can validate things across
various kinds of workloads, e.g., db, java, cpu, etc. Thanks.


That would be very welcome indeed. We have a big backlog of scheduler things
to go in after 2.6.9 is released (although not many of them change the runtime
behaviour IIRC). After that, I have some experimental performance work that
could use wider testing. After *that*, the multiprocessor scheduler will be in a
state where 2.6 shouldn't need much more work, so we can concentrate on just
tuning the dials.