The situation we do *not* want to get into is one where we kill a process
that was not causing the problem, the real culprit eats up even more
memory, we kill yet another innocent process, and so on, ad nauseam.
How do you avoid this? Obviously we want to kill the process that, if
left alive, would most quickly eat all the available memory, if such a
process exists. We can't predict the future, but we *can* figure out
who has been allocating memory fastest in the past and assume that
trend is likely to continue. So I suggest killing the process with the
highest ratio of total size to running time; the X server and other
large but long-running processes should be quite low on that list. For
an even more accurate heuristic, a bit of overhead can be added to track
the number of page allocations during the last <arbitrary time period>,
or a decaying "memory demand" load average. But those are more complex.
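To make the idea concrete, here is a rough user-space sketch of both
heuristics. All process names and numbers are invented for illustration,
and the decay constant is an arbitrary choice; this is not kernel code,
just the ranking logic.

```python
# Sketch of the proposed heuristic: rank processes by total size
# divided by running time, so a young, fast-allocating process scores
# higher than a large but long-running one like the X server.
# All process data below is invented for illustration.

def badness(total_size_kb, running_time_s):
    """Memory demand rate: size per second of runtime."""
    return total_size_kb / max(running_time_s, 1)  # avoid divide-by-zero

def pick_victim(processes):
    """processes: list of (name, total_size_kb, running_time_s) tuples.
    Returns the name of the process with the highest size/time ratio."""
    return max(processes, key=lambda p: badness(p[1], p[2]))[0]

def update_demand(avg, new_pages, alpha=0.9):
    """Decaying "memory demand" average: each update weights recent
    page allocations against the old average, so old activity fades.
    alpha is an arbitrary decay constant chosen for illustration."""
    return alpha * avg + (1 - alpha) * new_pages

# Invented example: a big old X server vs. a young runaway allocator.
procs = [
    ("X",       40000, 86400),  # large but old -> low demand rate
    ("runaway", 30000,    60),  # smaller but allocating fast -> high rate
    ("shell",     500,  3600),
]
print(pick_victim(procs))  # -> runaway
```

The point of the ratio (and of the decaying average) is the same: it is
the *rate* of memory demand, not the absolute size, that identifies the
likely culprit.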
---------------------------------------------------------------------------
Tim Hollebeek | Disclaimer :=> Everything above is a true statement,
Electron Psychologist | for sufficiently false values of true.
Princeton University | email: tim@wfn-shop.princeton.edu
----------------------| http://wfn-shop.princeton.edu/~tim (NEW! IMPROVED!)