On 21 Mar 2000 21:29:40 +0100, you wrote:
>On 20-Mar-00 14:19:28, James Sutherland wrote the following about "Re: Overcommitable memory??":
>> On 20 Mar 2000 01:08:46 +0100, you wrote:
>
>>> Actually, just having overcommitment turned off will itself tend to keep
>>>a little memory free for someone to log in and sort out the situation.
>>>Let's say the system is down to 500 kB of free memory (RAM+swap) and a
>>>program requests 600 kB. Now, without overcommitment of memory, the
> ^^^^^^^^^^^^^^^^^^^^^^^^
>>>allocation will fail and the system will still have 500 kB of free memory
> ^^^^^^^^^^^^^^^^^^^^
>>>left, all your daemons will still be running, etc.
>
>> No; malloc() still fails, because the 600K allocation exceeds free VM.
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> Why do we suddenly agree about this part? A misunderstanding?
My point is, malloc() fails with or without overcommit. (Unless you
have used /proc to turn "suicide mode" overcommit on, disabling ALL
sanity checking on malloc().)
Overcommit only comes into play here if TWO (or more) processes
allocate address space exceeding free memory WITHOUT using it. That
then causes problems if BOTH processes go on to use the memory.
(Without overcommit, at least one of them would have failed by now.)
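To make the single-process case concrete - an untested sketch, typed
straight into the mail, with the 600 kB figure borrowed from the
example above:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = 600 * 1024;   /* the 600 kB from the example above */
    char *buf = malloc(len);

    if (buf == NULL) {
        /* Refused up front - the no-overcommit outcome. The
           system keeps its 500 kB free, the daemons keep
           running. */
        perror("malloc");
        return 1;
    }

    /* First touch. With "suicide mode" overcommit the malloc()
       above can succeed anyway, and it is here that the kernel
       finds out it cannot back the pages. */
    memset(buf, 0, len);
    free(buf);
    return 0;
}

With sanity checking on, the malloc() is refused and nothing else on
the box is hurt; with the overcommit sysctl under /proc/sys/vm set to
"always say yes", the malloc() can succeed and the process dies much
less gracefully at the memset().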
>[cut]
>>>2) An application accesses a part of its allocated address space and the
>>>kernel can't handle it because all RAM and swap have been used. This can
>>>only happen when the kernel has overcommitted memory. Short of suspending
>>>the currently running process until memory is freed, there isn't much the
>>>kernel can do except to kill a process and free memory that way.
>
>> Why had the process allocated unused memory anyway?
>
> Like, because it never got a chance to use it after allocating it?
Never got a chance? It would have to have left the memory unused for
long enough for another process to allocate AND USE memory in the
intervening period. A fork-malloc bomb might manage this (sketched
below), but normal code shouldn't - the process which allocates
memory first should also USE that memory first, unless it is broken.
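For the record, the kind of thing I mean - a toy, obviously, and NOT
something to run on a machine you care about:

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* Grab 1 MB of address space and never touch it: an
           overcommitting kernel keeps saying yes long after
           real memory (RAM + swap) is spoken for. */
        (void)malloc(1024 * 1024);

        /* Double the number of offenders. */
        fork();

        /* Slow it down enough to watch it happen. */
        sleep(1);
    }
}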
>> In the normal case
>> (allocate a buffer, then use it) overcommit never comes into play.
>
> Race condition. Linux is a multitasking system. Think of two or more
>processes.
Two or more processes stress-testing the memory subsystem under fault
conditions. One process fails as a result. With Rik's patch, it is
repeatable and predictable which one fails.
>> It
>> is only if you malloc() a fairly large buffer (a few K or more) and
>> then do not use it until much later that there is a potential problem.
>
> Where "much later" can be anything from a timeslice and up. Even if you
>memset() straight away, too bad Linux decided to give the CPU(s) to other
>processes. Linux is a multitasking system.
This is only a problem if another process allocates AND USES address
space between the first one's allocation and its first use - and even
then only under pre-existing fault conditions.
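Which is why the obvious mitigation is to fault every page in as soon
as the allocation succeeds, so the commit happens immediately instead
of at some arbitrary later point. Rough sketch - untested, and the
helper name is my own invention. It narrows the window; it cannot
close it, since the scheduler can still preempt between any two page
faults:

#include <stdlib.h>
#include <unistd.h>

/* Allocate and immediately fault in every page. */
static void *xalloc_touched(size_t len)
{
    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    char *buf = malloc(len);
    size_t i;

    if (buf == NULL)
        return NULL;
    for (i = 0; i < len; i += page)
        buf[i] = 0;          /* fault each page in right away */
    if (len > 0)
        buf[len - 1] = 0;    /* make sure the last page is touched */
    return buf;
}

int main(void)
{
    char *buf = xalloc_touched(600 * 1024);

    if (buf == NULL)
        return 1;
    free(buf);
    return 0;
}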
> (We've been through the rest a couple of times already.)
To a large extent, this is just going round in circles. Ultimately,
when a system runs out of memory, there is something wrong. Disabling
overcommit (if it were possible) would only get you into this
condition faster, so there is little to be achieved by debating it
further.
James.