[patch] lowlatency patch for 2.4, lowlatency-2.4.0-test6-B5

From: Ingo Molnar (mingo@elte.hu)
Date: Thu Aug 03 2000 - 08:11:09 EST


I've ported my 2.2 lowlatency patch to 2.4.0-test6-pre1. The vanilla 2.4
kernel fixed some latencies present in 2.2.16, but it also introduced a
few new ones - and it leaves the fundamental latency sources largely
unchanged, so the size and scope of the lowlatency patch have not changed
much:

  http://www.redhat.com/~mingo/lowlatency-patches/lowlatency-2.4.0-test6-B5

This patch is *not* yet intended to be merged into the mainstream kernel.
I'd first like to see what kinds of latencies and behavior people get;
then I'll split the patch up into an 'uncontroversial' part and a
'controversial' part.

Especially because of the VM changes, I'd like people to try this: in my
experience it makes the system much 'smoother' under heavy VM load. The
stock VM creates latencies of up to 200 msec (!) on a 256 MB box, and
200 msec is easily noticeable to humans.

The patch is a 'take no prisoners' solution, i.e. I fixed all the latency
sources I could identify, no matter what the fix does to code 'beauty'. I
strongly disagree with the "it's OK in 99.9% of the cases" approach,
because in fact it's very easy to trigger bad latencies under various
(common) workloads. And I just do not want Linux to become another
Windows: "well, you can play music just fine, as long as you don't do this
and don't do that, and for God's sake, put enough RAM into your system.".

With this patch applied I was unable to trigger latencies larger than
0.5 msec, even under extreme VM load, in 100.0% of the cases - with the
typical latencies in an unloaded system being around 0.1 msec. The patch
also fixes some 'scalability latency sources', i.e. extreme latencies that
show up only if a process has many open files or lots of VM allocated.

95% of the conditional schedule points the patch adds fix a real latency
source that caused latencies bigger than 1 msec under realistic (and
common) workloads.

Main changes:

 - moved conditional_schedule() into assembly, to reduce its overhead.
   The 'slow path' is moved into a separate code section, so the normal
   code path is not impacted. A conditional schedule point is now
   typically just 3-4 x86 instructions and no (inline) branch, and if
   'current' is already used in the surrounding code then it's just 2
   instructions. (See asm-i386/condsched.h for more; a minimal C sketch
   of the concept follows this list.)

   My kernel has 332 conditional schedule points in its binary image. A
   condsched slow path is 22 bytes, so the offline section is 332 x 22 =
   ~7300 bytes (~7 KB of kernel RAM) - which sounds acceptable.

 - identified and fixed a couple of new latency sources.

 - the VM's pressure-handling code is much less atomic now, and in many
   cases does its work in many smaller steps instead of one large step
   (see the second sketch below).
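
For readers who do not want to dig into the assembly, here is a minimal
C sketch of what a conditional schedule point does, written against the
2.4-era task_struct (the actual patch implements this in assembly in
asm-i386/condsched.h, with the schedule() call placed out of line in a
separate section):

    /*
     * Minimal C sketch of a conditional schedule point: if a
     * reschedule is pending for the current task, call schedule().
     * The real patch emits this in assembly so the fast path is
     * only a few instructions and the slow path is out of line.
     */
    #define conditional_schedule()                  \
            do {                                    \
                    if (current->need_resched)      \
                            schedule();             \
            } while (0)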
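
And to illustrate the VM change, a hypothetical sketch of turning one
large atomic pass into smaller steps with a schedule point between them
(shrink_some_pages() and CHUNK are made-up names for illustration, not
taken from the patch):

    /*
     * Hypothetical sketch: instead of doing all the pressure work
     * in one long atomic pass, do a bounded amount per step and
     * offer to reschedule between steps. shrink_some_pages() and
     * CHUNK are illustrative names, not from the actual patch.
     */
    #define CHUNK 32

    extern void shrink_some_pages(int n);   /* hypothetical helper */

    static void shrink_in_steps(int nr_pages)
    {
            while (nr_pages > 0) {
                    int n = nr_pages < CHUNK ? nr_pages : CHUNK;

                    shrink_some_pages(n);       /* bounded work */
                    nr_pages -= n;
                    conditional_schedule();     /* yield if needed */
            }
    }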

Reports, comments and suggestions welcome!

        Ingo
