Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)

Theodore Y. Ts'o (tytso@MIT.EDU)
Wed, 19 Feb 1997 22:37:14 -0500


Date: Wed, 19 Feb 1997 19:35:02 -0500 (EST)
From: Illuminati Primus <vermont@gate.net>

From what I understand, this is so that a gigantic process that fork()s
and then exec()s won't fail even if we don't have enough space for another
copy of that gigantic process (when we only really need enough for that
smaller process)... I was wondering, why not make a forkexec() function
that never wastes the time actually forking the parent process, but just
allocates enough for the child? Is there a better way to do it? How much
would this break?

The reason why not is that there is usually some small amount of
processing between the fork() and the exec(). The child process might
close some file descriptors, set up some other file descriptors, set
some signals to be ignored, set up some resource limits, etc.

You could try to anticipate all of these potential (and quite valid)
permutations, and code them into forkexec(), but it ends up getting
very, very cumbersome, and you end up missing some obvious bit of
functionality anyway....

- Ted