Re: when out of memory, random programs die

Rik van Riel (H.H.vanRiel@phys.uu.nl)
Tue, 8 Dec 1998 08:32:32 +0100 (CET)


On Mon, 7 Dec 1998, Adam Heath wrote:

> I did a 'time make -j zImage modules' on 2.1.131ac4 to test my
> system out today, and relatively early random gcc's, cpp's, and
> make's started dying.

This is because gcc fork()s several times in order to
exec() its children. In order to fork, Linux needs a
contiguous 8kB (two-page) area for the new task's
kernel stack.

Normally this is not a problem at all, but when you
severely overload the machine the kernel can sometimes
fail to find two contiguous free pages. The result is
a failed fork(), which in turn means a broken output
pipe for cpp and a fork() failure for gcc.
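
For illustration, here is a minimal sketch of that
plumbing -- not gcc's actual source; the cpp/cc1
invocations and the input file are made up -- showing
where a failed second fork() leaves cpp writing into
a pipe with no reader:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        int fd[2];
        pid_t writer, reader;

        if (pipe(fd) == -1) {
                perror("pipe");
                return 1;
        }

        writer = fork();
        if (writer == -1) {
                perror("fork");
                return 1;
        }
        if (writer == 0) {
                /* "cpp": preprocessed output goes into the pipe. */
                close(fd[0]);
                dup2(fd[1], STDOUT_FILENO);
                close(fd[1]);
                execlp("cpp", "cpp", "foo.c", (char *) NULL);
                _exit(127);
        }

        reader = fork();
        if (reader == -1) {
                /* The failure described above: no two contiguous
                 * free pages for the child's kernel stack, so
                 * fork() returns -1 with ENOMEM or EAGAIN. */
                fprintf(stderr, "fork: %s\n", strerror(errno));
                /* Closing both ends leaves cpp with no reader;
                 * its next write() gets SIGPIPE -- the broken
                 * output pipe. */
                close(fd[0]);
                close(fd[1]);
                waitpid(writer, NULL, 0);
                return 1;
        }
        if (reader == 0) {
                /* "cc1": reads the preprocessed source. */
                close(fd[1]);
                dup2(fd[0], STDIN_FILENO);
                close(fd[0]);
                execlp("cc1", "cc1", (char *) NULL);
                _exit(127);
        }

        close(fd[0]);
        close(fd[1]);
        waitpid(writer, NULL, 0);
        waitpid(reader, NULL, 0);
        return 0;
}

One ENOMEM in the second fork() and both halves of
the pipeline go away, exactly as described above.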

2.0 is not affected by this problem because it
uses a 4kB kernel stack -- that, however, turned
out not to be enough and caused more problems
than it avoided...

> No other program anywhere was killed, only the ones involved in this
> compile run. What was odd, was that the top level make was killed,
> so the other makes kept running in the background.

This is probably a bug somewhere. Under normal
conditions the failure propagates all the way down
the process tree and every process gets killed. Well,
what do you expect when you stress your system
_that_ far :))

cheers,

Rik -- the flu hits, the flu hits, the flu hits -- MORE
+-------------------------------------------------------------------+
| Linux memory management tour guide.        H.H.vanRiel@phys.uu.nl |
| Scouting Vries cubscout leader.      http://www.phys.uu.nl/~riel/ |
+-------------------------------------------------------------------+
