Re: Overcommitable memory??

From: Paul Jakma (paul.jakma@compaq.com)
Date: Wed Mar 15 2000 - 11:23:49 EST


On Mon, 13 Mar 2000, Michael Bacarella wrote:

> The way I see it, apps that have successfully allocated memory in the past
> wouldn't start dying since there's no malloc() to fail, wheras new apps
> that want to bring down the system will start getting failed
> malloc()/mmap()'s

no. Because a well-behaved app may have malloc()'ed memory an hour ago, and
only now tries to write to it. The kernel overcommitted on that malloc(),
and an hour ago things looked ok. But now, when the kernel tries to fulfil
its promise, it finds it has no memory left.

what does it do? The process didn't make a system call, so there's nothing
to return an error code to. You can only really signal it (probably killing
it), suspend it (which doesn't help reclaim memory), or kill something else.

catch-22.
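
To make the timing concrete, here's a minimal C sketch (mine, not from the
original mails; the size, sleep and 4096-byte page size are just
illustrative). The malloc() succeeds immediately because nothing is backed
yet; only the writes "an hour later" force the kernel to find real pages,
and that's where an overcommitted system comes up short:

/* minimal sketch of the overcommit timing problem described above */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        size_t len = 512UL * 1024 * 1024;       /* ask for 512 MB */
        char *buf = malloc(len);

        if (buf == NULL) {
                /* with overcommit this rarely triggers - the kernel says
                 * yes without reserving pages or swap */
                perror("malloc");
                return 1;
        }

        sleep(3600);    /* "an hour later" - memory pressure has changed */

        /* touching each page is what actually demands real memory. if the
         * kernel can't honour its earlier promise, there is no error code
         * to return here - the process just takes a fault/signal, or
         * something else gets killed */
        for (size_t off = 0; off < len; off += 4096)
                buf[off] = 1;

        free(buf);
        return 0;
}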

>
> There's no reason to tell an application that it has X megs of memory all
> to itself to play with, and then KILL one of it's brothers if the kernel
> finds itself short.
>

it's a choice. You can either:

a) allocate swap/pages at the time of malloc()/fork() et al.
This incurs costs, both in memory/swap usage and in time - you need to
reserve backing store on disk and set up pages in memory, all for memory
that might never be used (a rough userspace approximation of this is
sketched after this list).

b) overcommit, and try to minimise the risks.
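
For what it's worth, a process that can't live with (b) can roughly
approximate (a) from userspace - this is only an illustrative sketch (the
helper name and the 4096-byte page size are made up), not something the
kernel does for you: touch every page straight after malloc(), so the cost,
and any failure, is paid up front rather than an hour later.

/* illustrative helper: allocate and immediately touch every page so that
 * backing pages are committed at allocation time, roughly like option (a) */
#include <stdlib.h>

#define PAGE_SZ 4096    /* assumed page size, for illustration */

static void *malloc_committed(size_t len)
{
        char *p = malloc(len);

        if (p == NULL)
                return NULL;

        /* writing one byte per page forces the kernel to allocate real
         * pages now; if memory is short, the pain happens here, at
         * allocation time, instead of later */
        for (size_t off = 0; off < len; off += PAGE_SZ)
                p[off] = 0;

        return p;
}

Of course, with overcommit enabled even the touch loop can get the process
killed rather than seeing malloc() return NULL, so this only moves the risk
earlier - it doesn't remove it.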

> But on the other hand, malloc() DOES return EAGAIN. Some applications
> would think to retry malloc() in a few seconds, which may have hopes of
> succeeding.
>

but that only applies to processes that call malloc() at the point of OOM.
You still have a bunch of processes holding memory they have already
malloc()'ed but haven't touched yet, so no pages have actually been
allocated for it.
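
As an aside, the retry idea from the quoted mail would look something like
the sketch below (hypothetical helper; note that malloc() actually signals
failure by returning NULL, with errno typically ENOMEM, rather than
returning EAGAIN). Even then, it can only help callers that are inside
malloc() at the moment of OOM - it does nothing for pages that were
promised earlier and never touched.

/* sketch of the "retry in a few seconds" idea from the quoted mail */
#include <stdlib.h>
#include <unistd.h>

static void *malloc_retry(size_t len, int attempts)
{
        void *p;

        while (attempts-- > 0) {
                p = malloc(len);
                if (p != NULL)
                        return p;
                sleep(2);       /* wait and hope some memory frees up */
        }
        return NULL;            /* still out of memory */
}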

> -MB
>

-paul jakma.
