I think that it is reasonable to expect programs that need to be reliable to
trap SIGSEGV and exit (somewhat) gracefully. If they do, how different is
that from malloc returning NULL? If a program can't get the memory it needs,
chances are that it will exit. If it is really important, dirty the pages
right after allocating so they are backed up front.
As for fork: while it is a major source of overcommitted memory, it is not
the only one. With Linux's copy-on-write I can malloc a big chunk of memory
and use it as needed. I can also mmap a 400 MB file and only consume memory
for the pages I write to.
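For illustration, a sketch of that mmap case; the file name, and the
assumption that the file exists and is at least 400 MB, are mine:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("bigfile.dat", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    size_t len = 400UL * 1024 * 1024;   /* 400 MB mapping */
    char *map = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    map[0] = 'x';   /* only the page holding this byte gets dirtied */

    munmap(map, len);
    close(fd);
    return 0;
}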
>It may be the explanation why in the last few years I have seen so many
>programs die for no cause in the middle of the day. (On non-Linux systems
>so far.) In an operational environment, such havoc is not appreciated.
If you run out of memory, you are going to have havoc one way or another.
Unless the system was very heavily loaded, this is doubtful. If it was
loaded that heavily, something was going to go wrong anyway.
Would a /proc twiddle that makes the allocator check whether enough space is
actually free before granting an allocation satisfy you? Someone suggested
that.
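For what it's worth, later kernels grew exactly such a knob,
/proc/sys/vm/overcommit_memory; writing 2 to it asks for strict accounting.
A sketch of flipping it from a program (needs root, and assumes a kernel
that has the knob):

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");
    if (f == NULL) { perror("fopen"); return 1; }

    /* 2 = never overcommit: refuse allocations that can't be backed. */
    fputs("2\n", f);
    fclose(f);
    return 0;
}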
===
Evan Jeffrey
ejeffrey@eliot82.wustl.edu
Just once, I wish we would encounter an alien menace that wasn't
immune to bullets.
-- The Brigadier, "Dr. Who"