Re: Avoiding OOM on overcommit...?

From: James Sutherland (jas88@cam.ac.uk)
Date: Thu Mar 30 2000 - 13:56:07 EST


On Wed, 29 Mar 2000 19:48:02 +0200 (MET DST), you wrote:

>"A month of sundays ago James Sutherland wrote:"
>> On Sun, 26 Mar 2000 17:50:37 +0200 (MET DST), you wrote:
>> >And I arrived later too. But while we're on the subject of swap space,
>> >doesn't "reserve me 8MB of disk-based swap as backing for my stack" cure
>> >everyones OOM blues? I propose that that's a fair use for swap nowadays.
>> >I don't _actually_ want to use swap myself, and having it there only
>> >as the "gold-standard" at the back of the IOU seems the best use.
>>
>> On the contrary - reserving an extra 8MB of swap per process would
>
>I replied to you privately already. I don't know how you get this
>idea. It's easy to show that no process can go OOM on stack if every
>process has backing reserved to take its stack pages when they have
>to be taken out of RAM for whatever reason.

Of course it can't run out of memory using memory which has already
been allocated.

>> CAUSE further OOM problems. You would still run OOM, it would just
>
>No. Try again. If that's the best you can do, your whole argument till
>now has been specious.

You are increasing the committed size of every process. That leaves
less free memory under any given workload, which makes running out
entirely a more likely and more frequent occurrence. With a couple of
hundred processes each carrying an 8MB reservation, that is well over
a gigabyte of swap tied up which nothing is actually using.

>> happen sooner than it otherwise would.
>
>Perhaps you are confusing "not being able to start more processes" with
>"an existing process getting an unexpected signal due to an
>unobservable system condition not of its making or ever reported to
>it". The latter is OOM.

No. "Running out of memory" is OOM. A process will only get this
signal if and when it tries to allocate memory when none is available.
A simple check on free memory levels would achieve the same, without
wasting all the memory we are running short of.

(OK, with Rik's OOM killer, the largest and youngest process will be
killed to free up memory instead.)
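
That check on free memory levels is cheap to do from userspace.
Something like this would do - an untested sketch, which assumes the
"MemFree:" / "SwapFree:" lines in /proc/meminfo and a purely
illustrative threshold policy:

    /* Untested sketch: gate a large allocation on the free RAM + swap
     * reported by /proc/meminfo.  Field names are assumed to be the
     * "MemFree:" / "SwapFree:" lines; the check is advisory only. */
    #include <stdio.h>
    #include <string.h>

    static long meminfo_kb(const char *field)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long kb = -1;

        if (!f)
            return -1;
        while (fgets(line, sizeof(line), f)) {
            if (strncmp(line, field, strlen(field)) == 0) {
                sscanf(line + strlen(field), " %ld", &kb);
                break;
            }
        }
        fclose(f);
        return kb;
    }

    /* Nonzero if it looks safe to take another needed_kb of memory. */
    int enough_memory(long needed_kb)
    {
        long free_kb = meminfo_kb("MemFree:");
        long swap_kb = meminfo_kb("SwapFree:");

        if (free_kb < 0 || swap_kb < 0)
            return 1;   /* can't tell, so don't refuse */
        return free_kb + swap_kb >= needed_kb;
    }

The numbers can of course change the moment you have read them, but
that is the point: it costs nothing when memory is plentiful, instead
of reserving swap that will never be touched.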

> It can never happen when no process is allowed to start unless
>there is swap space available to take its max stack (which guarantees
>that stack memory is never overcommitted), AND every call to malloc
>is wrapped as gnumalloc does it (which guarantees that heap memory
>can never be overcommitted).

As far as stack is concerned, you could just check the level of free
memory before starting a new process.

If you really want this behaviour in your process, just use gnumalloc
and allocate your own stack of whatever size you need. Much simpler
and more efficient.
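
Pre-committing the stack is also something a process can do for
itself: grab the region up front and touch every page, so the kernel
has to find real backing for it immediately and any failure shows up
at a point where you can still handle it. A rough sketch - the 8MB
figure and the sigaltstack use are only illustration:

    /* Untested sketch: reserve and pre-touch an 8MB region so the
     * kernel commits real pages for it now, then (for illustration)
     * install it as the alternate signal stack. */
    #include <stdlib.h>
    #include <string.h>
    #include <signal.h>

    #define RESERVED_STACK (8 * 1024 * 1024)

    int reserve_stack(void)
    {
        char *stack = malloc(RESERVED_STACK);
        stack_t ss;

        if (stack == NULL)
            return -1;  /* told up front, no signal out of the blue */

        /* Writing to every page forces real backing for each one. */
        memset(stack, 0, RESERVED_STACK);

        ss.ss_sp = stack;
        ss.ss_size = RESERVED_STACK;
        ss.ss_flags = 0;
        return sigaltstack(&ss, NULL);
    }

No kernel change needed, and only the processes which actually want
the guarantee pay for it.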

>I posted a simple bsd-like accounting scheme + kernel policy that
>guarantees the above and makes provision for both "secure" and
>"non-secure" types of processes. You can do better than it, but it at
>least works.

Why change the kernel behaviour unnecessarily?

James.



