Re: Overcommitable memory??

From: Khimenko Victor (khim@sch57.msk.ru)
Date: Thu Mar 23 2000 - 06:03:37 EST


In <15AD5D7936F8D21198240000F831776E0390394D-100000@dboexc1.dbo.cpqcorp.net> Paul Jakma (paul.jakma@compaq.com) wrote:
> On Thu, 16 Mar 2000, James Sutherland wrote:

>> On Wed, 15 Mar 2000, Paul Jakma wrote:

>> That doesn't happen. malloc() ALLOCATES the memory to the process. It is
>> *NOT* overcommitted. It may be backed by swapspace rather than physical
>> memory, but that block of memory *IS* available to the process.
>>

> so malloc() isn't overcommited? malloc()'ed memory is guaranteed to be
> available - ie the memory is reserved and accounted for at malloc()
> time?

He is just UTTERLY clueless. malloc() WILL happily overcommit a HUGE amount of memory.
You'll need more than one process though (see sample below).

> (i always thought malloc()'s were noted by the kernel ie in the page
> table for the process, but that the kernel didn't actually allocate a
> real page until the process tried to access that page - ie malloc() is
> overcommited)

Exactly. Even more: if you only READ from said page, the kernel will still NOT
allocate a real page for you! It will count this page as RSS, though.
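A small sketch (hypothetical, not from the original mail) that makes this visible,
assuming a Linux /proc filesystem: it prints the resident set size after merely
reading and then after writing a malloc()'ed area. Whether the read-only mappings
show up in RSS depends on the kernel version, so treat the numbers as something to
observe rather than as a given:
-- cut --
/* read_vs_write.c -- hypothetical sketch: watch RSS as a malloc()'ed area is
 * first only read and then written.  Assumes /proc/self/statm is available. */
#include <stdio.h>
#include <stdlib.h>

#define SIZE (100 * 1024 * 1024)  /* 100MiB, same size as the test program below */

static long rss_pages(void)
{
    long resident = -1;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        /* second field of statm is the resident set size, in pages */
        if (fscanf(f, "%*ld %ld", &resident) != 1)
            resident = -1;
        fclose(f);
    }
    return resident;
}

int main(void)
{
    volatile char sink = 0;       /* volatile so the read loop is not optimized away */
    char *p = malloc(SIZE);
    long i;

    if (!p)
        return 1;
    printf("after malloc : %ld pages resident\n", rss_pages());

    for (i = 0; i < SIZE; i += 4096)   /* only READ every page */
        sink += p[i];
    printf("after reading: %ld pages resident\n", rss_pages());

    for (i = 0; i < SIZE; i += 4096)   /* now WRITE every page */
        p[i] = 1;
    printf("after writing: %ld pages resident\n", rss_pages());

    return 0;
}
-- cut --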

> If so, how do we get into trouble?

>> malloc() does this. fork() doesn't, because there is no memory to
>> allocate.

> i must be misunderstanding you, cause I'm pretty sure fork() is supposed
> to copy the complete VM of the parent. :) And on older Unix's like
> ultrix you really had to have lot's of swap if you wanted to allow big
> processes to do a fork().

Exactly.

>> The whole point of fork() is that you are *NOT* simply
>> duplicating the in-VM image of the process!

> no, the whole point of overcommiting is so that we don't have to copy
> the VM of a process at fork() time, eg COW. But fork() does specify that
> the child process will have a complete copy of the parents VM.

Once again: fork() is supposed to make a COMPLETE copy of the process. COW is
there just to speed things up, not to change the semantics.
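As a side illustration (hypothetical, not from the original mail), a minimal sketch
of that point: each forked child logically owns a complete copy of the parent's
buffer, yet with COW no physical pages are duplicated until somebody writes. The
64MiB buffer and 8 children are arbitrary numbers picked for the sketch:
-- cut --
/* cow_fork.c -- hypothetical sketch: fork() gives each child a complete copy
 * of the parent's memory semantically, but COW shares the physical pages. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define SIZE (64 * 1024 * 1024)   /* 64MiB private buffer in the parent */
#define CHILDREN 8

int main(void)
{
    char *buf = malloc(SIZE);
    int i;

    if (!buf)
        return 1;
    memset(buf, 0xAA, SIZE);      /* parent really owns SIZE worth of pages */

    for (i = 0; i < CHILDREN; i++) {
        pid_t pid = fork();       /* child logically gets its own copy of buf */
        if (pid < 0)
            break;
        if (pid == 0) {
            /* The child only reads, so COW keeps it sharing the parent's
             * pages.  Writing to buf here would force private copies. */
            printf("child %d sees buf[0] = 0x%02x\n", i, (unsigned char)buf[0]);
            sleep(10);            /* keep the child around so `ps l' can see it */
            _exit(0);
        }
    }
    /* 8 children x 64MiB of "copied" memory would be 512MiB if fork() really
     * duplicated pages; `ps l' from another terminal shows that it doesn't. */
    for (i = 0; i < CHILDREN; i++)
        wait(NULL);
    free(buf);
    return 0;
}
-- cut --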

> (whether we copy it at fork() time like really ancient os's, COW with
> reserved backing store like ultrix or COW overcommit like linux is an
> implementation issue).

>> > but that only applies to processes that try malloc() at the point of
>> > OOM. You still have a bunch of processes with memory they have already
>> > malloc()'ed but havn't allocated yet.
>>
>> Nope.
>>

> really really sure about this? cause it goes completely against my
> understanding of how linux works. but i'd be very glad to be corrected
> if i'm wrong. I've honestly thought that malloc() was overcommited up
> till now.

Oh. Instead of arguing in vain, just do a simple check. Take this small program:
-- cut --
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    /* Allocate 100MiB of memory */
    char *p = (char *)malloc(100*1024*1024);
    /* Uncomment to actually touch the area; the accumulation into *p makes
       sure the compiler will not optimize the loop away */
/*
    int x;
    for (x = 0; x < 100*1024*1024; x++) {
        *p += p[x];
    }
*/
    /* wait for keypress */
    getchar();
    return 0;
}
-- cut --
Start 10 copies with the following command:
-- cut --
for i in 0 1 2 3 4 5 6 7 8 9 ; do { ./test $i & } ; done
-- cut --
And then type `ps l'. Here I got the following result:
-- cut --
[khim@localhost khim]$ ps l
 FLAGS UID PID PPID PRI NI   SIZE  RSS WCHAN     STA TTY  TIME COMMAND
   100 503 460  459  15  0   2152 1268 wait4     S   vc/1 0:00 -bash
     0 503 485  460   3  0 103404  324 do_signal T   vc/1 0:00 ./test 0
     0 503 486  460   4  0 103404  324 do_signal T   vc/1 0:00 ./test 1
     0 503 487  460   5  0 103404  324 do_signal T   vc/1 0:00 ./test 2
     0 503 488  460   5  0 103404  324 do_signal T   vc/1 0:00 ./test 3
     0 503 489  460   5  0 103404  324 do_signal T   vc/1 0:00 ./test 4
     0 503 490  460   8  0 103404  324 do_signal T   vc/1 0:00 ./test 5
     0 503 491  460   9  0 103404  324 do_signal T   vc/1 0:00 ./test 6
     0 503 492  460   9  0 103404  324 do_signal T   vc/1 0:00 ./test 7
     0 503 493  460  10  0 103404  324 do_signal T   vc/1 0:00 ./test 8
     0 503 494  460  10  0 103404  324 do_signal T   vc/1 0:00 ./test 9
100000 503 495  460  16  0   1452  568           R   vc/1 0:00 ps -l
-- cut --
and `free' says:
-- cut --
             total       used       free     shared    buffers     cached
Mem:        126208      58360      67848          0       3908      38688
-/+ buffers/cache:       15764     110444
Swap:            0          0          0
-- cut --
Do you really think my non-existent swap can hold all 1000MiB of allocated
memory? If you add the for(...) loop from the comment to the program, you'll see that
ALL TEN processes have an RSS of 50-60MiB at some point (while they are still running,
not stopped, that is)! That is a total of 500-600MiB of RSS while only 128MiB of RAM is
available (swap is completely disabled for the test). If that's not overcommitting, then
what IS overcommitting???

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
