Re: [RFC patch 1/2] sched: dynamically adapt granularity with nr_running

From: Peter Zijlstra
Date: Sat Sep 11 2010 - 14:58:34 EST


On Sat, 2010-09-11 at 13:37 -0400, Mathieu Desnoyers wrote:

It's not at all clear what exactly you're doing, or why.

What we used to have is:

period -- time in which each task gets scheduled once

This period was adaptive in that we had an ideal period
(sysctl_sched_latency), but keeping strictly to it means each task gets
latency/nr_running time. That is undesirable because busy systems would
over-schedule due to tiny slices. Hence we also had a minimum slice
(sysctl_sched_min_granularity).

This yields:

period := max(sched_latency, nr_running * sched_min_granularity)

[ where we introduce the intermediate:
nr_latency := sched_latency / sched_min_granularity
in order to avoid the multiplication where possible ]
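
For reference, a minimal user-space sketch of that computation (modelled
loosely on __sched_period() in kernel/sched_fair.c; the constants are the
stock defaults, nothing taken from the patch):

#include <stdio.h>

typedef unsigned long long u64;

/* Stock defaults, in nanoseconds; both are tunables in the real kernel. */
static const u64 sysctl_sched_latency         = 6000000ULL;  /* 6 ms    */
static const u64 sysctl_sched_min_granularity =  750000ULL;  /* 0.75 ms */

/* nr_latency := sched_latency / sched_min_granularity (8 with defaults) */
static u64 nr_latency(void)
{
	return sysctl_sched_latency / sysctl_sched_min_granularity;
}

/* period := max(sched_latency, nr_running * sched_min_granularity) */
static u64 sched_period(unsigned long nr_running)
{
	if (nr_running > nr_latency())
		return nr_running * sysctl_sched_min_granularity;
	return sysctl_sched_latency;
}

int main(void)
{
	printf("period(4)  = %llu ns\n", sched_period(4));   /* 6000000: latency wins    */
	printf("period(16) = %llu ns\n", sched_period(16));  /* 12000000: min slice wins */
	return 0;
}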

Now you introduce a separate preemption measure, sched_gran as:

sched_gran := { sched_std_granularity;                                  nr_running <= 8
              { max(sched_min_granularity, sched_latency / nr_running); otherwise

Which doesn't make any sense at all: by the very definition of max(), the
result can never be smaller than the current sched_min_granularity, only
as large or larger.
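
In case I'm misreading the patch, here is what I take sched_gran to
compute, as a self-contained sketch (constants as above; the value of the
new sched_std_granularity knob is invented purely for illustration):

typedef unsigned long long u64;

static const u64 sysctl_sched_latency         = 6000000ULL;  /* 6 ms    */
static const u64 sysctl_sched_min_granularity =  750000ULL;  /* 0.75 ms */
static const u64 sched_std_granularity        = 2000000ULL;  /* made-up value */

static u64 max_u64(u64 a, u64 b)
{
	return a > b ? a : b;
}

/* The patch's proposed preemption granularity, as I read it. */
static u64 sched_gran(unsigned long nr_running)
{
	if (nr_running <= 8)
		return sched_std_granularity;
	/* for nr_running > 8, latency/nr_running < min_granularity with
	 * the defaults, so this branch degenerates to the minimum */
	return max_u64(sysctl_sched_min_granularity,
		       sysctl_sched_latency / nr_running);
}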

And you break the above definition of period by replacing nr_latency by
8.
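
Concretely, 8 only equals nr_latency for the stock defaults (6ms /
0.75ms); the moment someone tunes sched_min_granularity up to, say, 2ms,
nr_latency should drop to 3, but your literal 8 keeps the old crossover:

  with nr_latency = 3: period(8) = max(6ms, 8 * 2ms) = 16ms -> 2ms per task
  with a literal 8:    period(8) = 6ms                      -> 0.75ms per task

i.e. slices well below the minimum slice that was just configured.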

Not at all charmed; these look like random changes without conceptual
integrity.