RE: Killing/balancing processes when overcommitted

From: Jim Sibley (jimsibley@earthlink.net)
Date: Fri Sep 13 2002 - 16:13:34 EST


Tim wrote:
> There is another solution. And that is never allocate memory unless
> you have swap space. Yes, the issue is that you need to have lots of
> disk allocated to swap but on a big machine you will have that space.

How do you predict whether a program is going to ask for more memory? Maybe it
only needs the additional memory for a short time and, being a good citizen,
gives it back.
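
To make that concrete, here is a minimal user-space sketch of my own (not from
Tim's mail), assuming the default Linux overcommit behaviour: the reservation
is granted immediately and only the pages that are actually touched ever cost
RAM or swap. Under a strict "never allocate without swap behind it" policy,
the malloc() itself would have to be refused or fully backed up front:

/* Sketch: shows why the kernel cannot tell at malloc() time how much
 * memory a process will really use. Under the default overcommit
 * policy the reservation succeeds immediately; pages only cost
 * anything once they are touched.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	size_t len = 512UL * 1024 * 1024;	/* ask for 512 MB */
	char *p = malloc(len);

	if (!p) {
		perror("malloc");	/* strict accounting would fail here */
		return 1;
	}
	printf("reserved %zu MB, touching half of it...\n", len >> 20);

	/* Touch only half the pages: the other half never consumes RAM
	 * or swap, which is the "good citizen" case above.
	 */
	for (size_t off = 0; off < len / 2; off += sysconf(_SC_PAGESIZE))
		p[off] = 1;

	sleep(10);	/* compare VSZ and RSS in top(1) meanwhile */
	free(p);
	return 0;
}

Running it and watching VSZ against RSS shows the gap between what was
promised and what is actually in use.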

How much disk needs to be allocated for swap? In 32-bit, each logged-in user
is limited to 2 GB, so do we need 2 GB of swap for each logged-on user? With
250 users, that's 500 GB of disk?
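
For what it's worth, the arithmetic under strict reservation looks something
like the sketch below; the RAM figure is my own assumption, the other numbers
are the ones from the paragraph above:

/* Back-of-the-envelope sketch for the reservation question above; the
 * 2 GB per-user figure is the usual 32-bit user address space limit,
 * the user count is just the example from the text.
 */
#include <stdio.h>

int main(void)
{
	unsigned long users       = 250;	/* logged-in users, from the text   */
	unsigned long per_user_gb = 2;		/* 32-bit user VM limit, worst case */
	unsigned long ram_gb      = 8;		/* assumed machine RAM              */

	/* Strict accounting has to back every possible allocation, so the
	 * swap reserve is the worst-case commit minus whatever RAM covers.
	 */
	unsigned long commit_gb = users * per_user_gb;

	printf("worst-case commit: %lu GB, swap needed beyond %lu GB RAM: %lu GB\n",
	       commit_gb, ram_gb, commit_gb > ram_gb ? commit_gb - ram_gb : 0);
	return 0;
}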

In a 64-bit system, how much swap would you reserve?

Actually, another OS took this approach in the early '70s, and it was quickly
junked when they found out how much disk they really had to keep in reserve
for paging.

> This way the offending process that asks for more memory will be
> the one that gets killed. Even if the 1st couple of ones aren't the
> misbehaving process eventually it will ask for more memory and suffer
> process execution.

It may not be the "offending process" that is asking for more memory. How do
you judge "offending"?
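
Just to make the judgement problem concrete, here is a deliberately naive
sketch of one possible "who is offending?" heuristic. It is my own
illustration, not the kernel's actual badness() code in mm/oom_kill.c; it only
shows why the process asking for one more page right now is often not the one
you would want to shoot:

/* Naive sketch: rank candidates by how much they could still fault in,
 * and pick the top one. The process requesting memory at the moment of
 * pressure may score far lower than a quiet hog that reserved
 * gigabytes long ago.
 */
#include <stdio.h>

struct candidate {
	const char   *comm;	/* process name            */
	unsigned long vm_kb;	/* total virtual size      */
	unsigned long rss_kb;	/* pages actually resident */
};

static unsigned long score(const struct candidate *c)
{
	return c->vm_kb > c->rss_kb ? c->vm_kb - c->rss_kb : 0;
}

int main(void)
{
	struct candidate procs[] = {
		{ "editor",       40000,  35000 },	/* the one asking for memory now */
		{ "simulator",   900000, 200000 },	/* quiet, but huge reservation   */
		{ "shell",         5000,   4000 },
	};
	const struct candidate *worst = &procs[0];

	for (unsigned i = 1; i < sizeof(procs) / sizeof(procs[0]); i++)
		if (score(&procs[i]) > score(worst))
			worst = &procs[i];

	printf("would kill: %s (score %lu KB)\n", worst->comm, score(worst));
	return 0;
}

Whatever metric you pick, some well-behaved process can end up at the top of
the list, which is exactly the objection above.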
