Re: Linux 2.2.15pre12

From: Rik van Riel (riel@conectiva.com.br)
Date: Mon Mar 06 2000 - 12:55:41 EST


On Mon, 6 Mar 2000, Paul Jakma wrote:
> On Mon, 6 Mar 2000, Rik van Riel wrote:
>
> > Without overcommit you'd need to have the 500 MB of swap free
> > that the big simulation is using, even though it'll only use
> > 1 MB for the little process that's being exec()ed...
>
> of course.. yes..
>
> > It's not a vulnerability as such, it's more of a design
> > decision. Overcommit makes sense in a *lot* of situations.
>
> it makes sense, yes, but it means linux is more vulnerable to leaky
> applications than other unices seem to be.

Nope, read on...

> the question is: how do you guarantee stability under overload?
> it's no good for linux to be generally more efficient under load
> than Unix xyz if, under severe load, linux goes OOM and dies while
> Unix xyz manages to keep going.
>
> linux deadlocking because netscape was left running a java applet over a
> weekend does not give it a good reputation for stability. And no amount
> of clever OOM-killing heuristics can recover that reputation, because
> Unix xyz doesn't get into that OOM situation in the first place.

On the other Unix, all memory will be allocated and you'll
have no room left to run your ps and kill commands to
figure out whether it's netscape that needs killing or something else.

Or to fork instances of your MTA and procmail to receive
your email, or ...

No-overcommit is good for some things, but not for the
situation you just described.
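
To illustrate the fork()/exec() case above, here is a rough sketch
(the 500 MB figure and /bin/true are just stand-ins, not anything
from the thread):

/*
 * A large process spawning a tiny helper.  With overcommit, the
 * child's copy-on-write image needs no extra swap reservation; with
 * strict accounting, the fork() itself can fail with ENOMEM even
 * though the child will immediately exec() something small.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
        /* Stand-in for the "big simulation": touch ~500 MB so the
         * pages are really instantiated, not just reserved. */
        size_t big = 500UL * 1024 * 1024;
        char *sim = malloc(big);
        if (!sim) {
                perror("malloc");
                return 1;
        }
        memset(sim, 1, big);

        pid_t pid = fork();     /* duplicates the 500 MB image (COW) */
        if (pid < 0) {
                perror("fork"); /* under no-overcommit, this is where
                                 * it fails once swap is tight */
                return 1;
        }
        if (pid == 0) {
                /* Child replaces the huge image with a tiny one. */
                execl("/bin/true", "true", (char *)NULL);
                _exit(127);
        }
        waitpid(pid, NULL, 0);
        free(sim);
        return 0;
}

The same effect is why ps, kill, and fresh MTA processes can't be
started once memory is exhausted under strict accounting: every
fork() would need the parent's full footprint reserved up front,
even for a child that exec()s something tiny.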

cheers,

Rik

--
The Internet is not a network of computers. It is a network
of people. That is its real strength.

http://www.conectiva.com/ http://www.surriel.com/



