SUICIDE! No wonder Linux gets such a rap for being unreliable. (If this is
truly how things work, someone please tell me this isn't so.)
As a programmer, I expect that when I have successfully requested memory
to be allocated, it really has happened. It now appears that, on top of
everything else, by writing to memory at the wrong time I could run out
of virtual memory.
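What is being complained about is roughly the following (a minimal
sketch; the 1 GB figure is arbitrary, purely for illustration): on an
overcommitting kernel the malloc() below can return a valid pointer
even when that much RAM plus swap isn't free, because the backing
pages are only allocated as they are first written.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 1UL << 30;         /* 1 GB of address space */
        char *p = malloc(len);

        if (p == NULL) {                /* refused up front: no overcommit */
            perror("malloc");
            return 1;
        }

        /*
         * Under overcommit the malloc() above succeeds even if 1 GB of
         * RAM plus swap is not available.  Physical pages are handed out
         * only as each page is first written, so it is this loop, not
         * the malloc(), that can run the machine out of memory.
         */
        memset(p, 0xff, len);

        printf("touched %zu bytes\n", len);
        free(p);
        return 0;
    }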
Most systems actually work this way, both in the computer world and in
real life. Disk quota systems very often overcommit disk space,
because they know not all users use their full quota of disk space.
Airlines sell more tickets than there are seats on a plane, because they know
(to a very high accuracy, thanks to statistics) that a certain
percentage of passengers won't show up.
There are very few guarantees in this life. The power might fail; just
because you can exec a program doesn't mean that it's guaranteed to run.
A meteorite the size of Mexico might land in the ocean, and wipe out all
life in North America... (and sadly enough, even though your program
successfully called malloc(), it wouldn't be able to use the memory
space because all of the U.S. would have been engulfed in a fireball. :-)
Seriously, people are stressing out over something that really isn't a
problem. This is true for two reasons. First of all, in most
cases the excess memory simply isn't needed. For example, when a 32
megabyte emacs process forks and execs a 10k movemail process, 100 times
out of 100 it won't need the extra 32 megabytes of memory to be
committed during the fork. It's not like the emacs program will
suddenly say, "Gee! I'm not going to exec the 10k movemail program this
time; instead I'm going to touch every single one of my copy-on-write
pages, and force the system to give me lots of memory." Programs are
in fact deterministic, and if you look at their access patterns, they
very often simply don't need all of the memory that you would need to
commit in a hard-commit system.
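If you want to see the fork-and-exec case spelled out, a minimal sketch
is below (/usr/bin/true stands in for movemail here, purely as an
illustration): the child shares the parent's pages copy-on-write and
immediately replaces them with exec(), so the parent's 32 megabytes
never actually have to be duplicated.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();     /* child shares parent's pages copy-on-write */

        if (pid < 0) {
            perror("fork");
            return 1;
        }

        if (pid == 0) {
            /*
             * The child immediately throws away its (shared) copy of the
             * parent's address space by exec'ing a tiny program, so none
             * of the copy-on-write pages are ever actually copied.
             */
            execl("/usr/bin/true", "true", (char *)NULL);
            perror("execl");    /* only reached if the exec fails */
            _exit(127);
        }

        /* the parent just waits; its own pages were never touched */
        waitpid(pid, NULL, 0);
        return 0;
    }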
Secondly, Linux also doesn't work the way most Unix systems work, in
that read-only text pages don't require swap space. So, in the case of a
memory shortage, read-only text pages can always be discarded, and then
swapped back in from the program executable image on the disk when
needed. So it is extremely rare that Linux wouldn't be able to find a
page for a program, because it will start throwing out executable pages
first.
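This is easy to picture if you think of program text as a read-only,
file-backed mapping. The sketch below (it maps /bin/ls, chosen
arbitrarily) sets up essentially that kind of mapping: under memory
pressure such pages can simply be dropped and later re-read from the
file on disk, so no swap space is ever needed for them.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/bin/ls";   /* any executable will do */
        int fd = open(path, O_RDONLY);
        struct stat st;

        if (fd < 0 || fstat(fd, &st) < 0) {
            perror(path);
            return 1;
        }

        /*
         * A read-only, file-backed mapping is what program text looks
         * like to the VM: under memory pressure these pages can be
         * discarded and re-read from the file on disk when needed, so
         * they never need swap space of their own.
         */
        void *text = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (text == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        printf("mapped %lld read-only bytes of %s\n",
               (long long)st.st_size, path);
        munmap(text, st.st_size);
        close(fd);
        return 0;
    }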
What *will* happen in cases of extreme memory shortage is that the
machine will start thrashing very badly and slow down more and more, as
more and more code pages are thrown out and immediately paged back in as
the program tries to get work done.
However, this degradation happens at the point where your system is
pretty much useless because you've overcommitted it anyway. In real
life, you generally know in advance if this is a risk, because a
program's memory usage patterns are usually very well-defined.
One can imagine systems where processes are killed by the kernel when
you run out of memory. Many Unix systems in fact do play games like
this, including the original BSD systems, and even IBM's AIX will kill
processes when memory is tight.
- Ted