Re: Network slowdown due to CFS

From: Jarek Poplawski
Date: Wed Oct 03 2007 - 04:00:09 EST


On 02-10-2007 08:06, Ingo Molnar wrote:
> * David Schwartz <davids@xxxxxxxxxxxxx> wrote:
...
>> I'm not familiar enough with CFS' internals to help much on the
>> implementation, but there may be some simple compromise yield that
>> might work well enough. How about simply acting as if the task used up
>> its timeslice and scheduling the next one? (Possibly with a slight
>> reduction in penalty or reward for not really using all the time, if
>> possible?)
>
> firstly, there's no notion of "timeslices" in CFS. (in CFS tasks "earn"
> a right to the CPU, and that "right" is not sliced in the traditional
> sense) But we tried a conceptually similar thing [...]
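
(For readers not following the scheduler code: the "right" Ingo mentions
is, as I understand it, the per-task virtual runtime that CFS keeps
sorted, always running the task that has earned the least CPU so far.
A rough user-space sketch of that idea - the names and numbers below are
made up for illustration, nothing is copied from the kernel:

#include <stdio.h>
#include <stdint.h>

struct task {
        const char *name;
        uint64_t vruntime;  /* weighted CPU time consumed so far, ns */
        uint64_t weight;    /* nice-level weight; heavier = charged less */
};

/* charge 'delta_ns' of real runtime, scaled by the task's weight */
static void account(struct task *t, uint64_t delta_ns)
{
        t->vruntime += delta_ns * 1024 / t->weight;
}

/* pick the task that has earned the least CPU so far */
static struct task *pick_next(struct task *rq, int n)
{
        struct task *best = &rq[0];
        for (int i = 1; i < n; i++)
                if (rq[i].vruntime < best->vruntime)
                        best = &rq[i];
        return best;
}

int main(void)
{
        struct task rq[] = {
                { "nice 0",  0, 1024 },
                { "nice -5", 0, 3121 },
        };
        for (int i = 0; i < 6; i++) {
                struct task *t = pick_next(rq, 2);
                printf("run %s\n", t->name);
                account(t, 1000000);    /* pretend it ran for 1 ms */
        }
        return 0;
}

The heavier task gets picked more often simply because its vruntime
grows more slowly - no fixed slice anywhere.)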

From kernel/sched_fair.c:

"/*
* Targeted preemption latency for CPU-bound tasks:
* (default: 20ms, units: nanoseconds)
*
* NOTE: this latency value is not the same as the concept of
* 'timeslice length' - timeslices in CFS are of variable length.
* (to see the precise effective timeslice length of your workload,
* run vmstat and monitor the context-switches field)
..."

So, there is no notion of timeslices, yet they are(!) of variable
length, and their precise effective length can even be measured, in
nanoseconds no less? (Just don't call it a timeslice!)
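
To illustrate what "variable length" means here: with a fixed latency
target, each runnable task's effective slice is roughly a weighted share
of that target, so it shrinks as tasks are added. A back-of-the-envelope
sketch - the numbers and names below are mine, not taken from
sched_fair.c:

#include <stdio.h>
#include <stdint.h>

#define LATENCY_NS   20000000ULL  /* the 20 ms target from the comment */
#define MIN_GRAN_NS   4000000ULL  /* assumed floor for a single slice */

/*
 * each runnable task gets a share of one latency period proportional
 * to its weight; the period only stretches once the per-task minimum
 * would otherwise be violated
 */
static uint64_t effective_slice(unsigned nr_running, uint64_t weight,
                                uint64_t total_weight)
{
        uint64_t period = LATENCY_NS;

        if (nr_running * MIN_GRAN_NS > period)
                period = nr_running * MIN_GRAN_NS;
        return period * weight / total_weight;
}

int main(void)
{
        for (unsigned n = 1; n <= 8; n *= 2)
                printf("%u equal tasks -> ~%llu ns each\n", n,
                       (unsigned long long)effective_slice(n, 1024,
                                                           1024ULL * n));
        return 0;
}

So the thing that is not a timeslice still has a perfectly computable
length - it just changes with the load.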

Well, I'm starting to think this new scheduler may still be a bit too
simple...


> [...] [ and this is driven by compatibility
> goals - regardless of how broken we consider yield use. The ideal
> solution is of course to almost never use yield. Fortunately 99%+ of
> Linux apps follow that ideal solution ;-) ]

Nevertheless, it seems this remaining 1% is important enough to boast
about a little:

"( another detail: due to nanosec accounting and timeline sorting,
sched_yield() support is very simple under CFS, and in fact under
CFS sched_yield() behaves much better than under any other
scheduler i have tested so far. )"
[Documentation/sched-design-CFS.txt]
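
For what it's worth, here is a toy sketch of why yield can indeed be
"very simple" once you have nanosec keys and a sorted timeline: push the
yielder's key past the rightmost task, so it lands at the back of the
queue. The names are made up for illustration - this is not the CFS
code:

#include <stdio.h>
#include <stdint.h>

struct entity {
        const char *name;
        uint64_t vruntime;      /* position on the timeline, in ns */
};

/*
 * with nanosecond keys, "yield" is just a key update; a real scheduler
 * would then re-insert the entity into its sorted queue (the timeline)
 */
static void toy_yield(struct entity *queue, int n, struct entity *se)
{
        uint64_t rightmost = 0;

        for (int i = 0; i < n; i++)
                if (queue[i].vruntime > rightmost)
                        rightmost = queue[i].vruntime;
        if (se->vruntime <= rightmost)
                se->vruntime = rightmost + 1;
}

int main(void)
{
        struct entity q[] = { { "A", 100 }, { "B", 200 }, { "C", 300 } };

        toy_yield(q, 3, &q[0]);         /* A yields behind B and C */
        for (int i = 0; i < 3; i++)
                printf("%s: %llu\n", q[i].name,
                       (unsigned long long)q[i].vruntime);
        return 0;
}

Whether that is "much better" behaviour than other schedulers is, of
course, exactly what this thread is about.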

Cheers,
Jarek P.