James Sutherland wrote:
> >The important difference is that other processes will not cause the big
> >512MB process to die. This is the behaviour I've been trying to
> >describe.
>
> This has nothing to do with overcommit. Your process may be killed for
> a variety of reasons (system low on resources, hostile sysadmin,
> system reboot, etc.) but nothing to do with overcommit.
>
> This is true whether overcommit is enabled or not.
Actually, I've changed my mind. If a process manages to reserve memory -
either by running with overcommit disabled, or by overcommitting and then
touching every page - then it cannot be killed by an out-of-memory
condition, unless it performs a syscall which itself runs out of memory.
But you can avoid those syscalls, or make allowances. And yes, you CAN
make allowances, such as freeing some memory (in my case, I could free
part of the mp3 player's cache, for example).
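Roughly what I have in mind (a sketch only - reserve() is my name for
it, and the page size handling is simplified):

#include <stdlib.h>
#include <unistd.h>

/*
 * Sketch: with overcommit on, "reserve" memory by touching every
 * page right after allocating it, so the kernel must commit real
 * pages now rather than at some random later fault.
 */
static void *reserve(size_t len)
{
	char *p = malloc(len);
	size_t page = getpagesize();
	size_t i;

	if (p == NULL)
		return NULL;	/* allocation refused up front */

	/*
	 * One write per page forces commitment.  If memory is
	 * short, the failure happens here, at a point we chose,
	 * not somewhere deep in the program later on.
	 */
	for (i = 0; i < len; i += page)
		p[i] = 0;

	return p;
}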
But which is nicer:
- Program learns there is not enough memory because it gets SIGBUS, at
  whatever point it happens to touch an unbacked page (overcommit).
- Program learns there is not enough memory because the allocation call
  returns ENOMEM, right where it can handle the failure (no overcommit).
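For contrast, the ENOMEM case looks something like this (back_off() is
a made-up name for whatever recovery your program can do - freeing a
cache and retrying, say):

#include <errno.h>
#include <stddef.h>
#include <sys/mman.h>

extern void back_off(void);	/* hypothetical recovery routine */

/*
 * With overcommit off, mmap() itself reports the failure, so the
 * error arrives synchronously at a clean recovery point.
 */
static void *try_alloc(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		if (errno == ENOMEM)
			back_off();	/* free some cache, maybe retry */
		return NULL;
	}
	return p;
}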
Why do I feel like we're going in circles? Perhaps the easiest option
would be to add a MAP_RESERVED flag to mmap(). That seems like the only
atomic way of allocating and reserving memory in a single step.
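Usage would be something like the fragment below. To be clear,
MAP_RESERVED is the flag I'm proposing, not anything that exists now,
so this cannot run as written:

	/*
	 * Hypothetical: one call either allocates the range AND
	 * commits backing store for all of it, or fails cleanly
	 * with ENOMEM and changes nothing.
	 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_RESERVED,
		       -1, 0);
	if (p == MAP_FAILED)
		return -1;	/* errno == ENOMEM: refused atomically */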
--
John Ripley, empeg Ltd. http://www.empeg.com