Re: Out of memory kernel death

Justin Dossey (lk@walker3-19.reshall.ou.edu)
Thu, 8 May 1997 14:30:27 +0000


WARNING: I am NOT a kernel expert. However, I can think.

On Thu, 8 May 1997, Tim Hollebeek wrote:
> How do you avoid this? Obviously we want to kill the process that, if
> left alive, would most quickly eat all the available memory if such a
> process exists. We can't predict the future, but we *can* figure out
> who has been allocating memory the fastest in the past and assume this
> is likely to continue. So I suggest killing the process with the
> (highest total size/running time); the X server and other large, but
> long running processes should be quite low on that list. If you want
> an even more accurate heuristic, a bit of overhead can be added to track
> number of page allocations during the last <arbitrary time period> or
> a decaying "memory demand" load average. But those are more complex.

Suppose I were to write a program, which I will call K, that does two
things (a rough C sketch follows footnote [a] below):

1) checks for the existence of a certain process and, if it doesn't
exist, spawns that process[a].
2) _gradually_ takes up all available memory.

[a] Pick your favorite memory-hog spike program.
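
A minimal sketch of K, assuming a spike program sits at the made-up
path "./spike" (both the path and the 512-byte growth step are just
illustration):

/* K: keep a memory-hog spike alive while growing slowly itself. */
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
	char *p;
	pid_t pid;

	for (;;) {
		pid = fork();
		if (pid == 0) {			/* child: become [a] */
			execl("./spike", "spike", (char *)0);
			_exit(1);		/* exec failed */
		}

		/* step 2: grow K by a few hundred bytes per cycle,
		 * touching the memory so the pages really count */
		p = malloc(512);
		if (p)
			memset(p, 1, 512);

		if (pid > 0)
			waitpid(pid, NULL, 0);	/* [a] is killed; reap it */
	}
}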

Seems to me that a kernel (or a daemon) that killed _the process_ with
the highest total size / running time would fail to catch K.

A typical scenario:
1. I launch K.
2. K launches [a].
3. Kernel kills [a].
4. K increases its own memory usage by a few hundred bytes.
5. K launches [a].
6. Kernel kills [a].

Etc., etc., ad nauseam. At the apex of this memory-increasing cycle, K
would be launching [a] as quickly as it possibly could, and the kernel
would be killing [a] as quickly as it possibly could, eating CPU like
never before. The kernel would _not_, however, kill K: each instance
of [a] is freshly spawned and huge, so its size/runtime ratio dwarfs
that of K, whose slow growth over a long lifetime keeps its own ratio
low.

Am I crazy, or would this work?

:::::::::Justin Dossey::::::::::
:dimethyltryptamine@cryogen.com:
:::::::::dossey@ou.edu::::::::::