Re: Some questions about linux kernel.

From: James Sutherland (jas88@cam.ac.uk)
Date: Mon Mar 20 2000 - 14:50:35 EST


On Mon, 20 Mar 2000 12:00:52 -0400, you wrote:

>orc@pell.portland.or.us (david parsons) said:
>> In article <linux.kernel.Pine.LNX.4.10.10003171319000.3718-100000@dax.joh.cam.ac.uk>,
>> James Sutherland <jas88@cam.ac.uk> wrote:
>
>[...]
>
>> >In fact, it makes the problem worse.
>>
>> If the problem is an intruder on your system who is attempting a
>> deliberate denial of service attack, maybe. If the problem is a
>> program allocating more memory than there is in the system and
>> making a different program die because of the overcommit,
>> non-overcommit is the best solution to this feature.
>
>If one program allocates just shy of what is available, it will succeed.
>The next one then can't get the memory it needs and crashes. Exactly as in
>the overcommitting case: innocent bystanders get shot, just earlier (or even
>much earlier) if you don't overcommit. And with a clean bullet through the
>head (malloc(3), or fork(2), fails), not by a random shot at the body
>(SIGSEGV when accessing memory that "should be there"). The end result is
>the same.

*IF* there is genuinely not enough VM, then yes, both systems end up
in the same place. If, OTOH, there IS enough, but only just,
overcommit allows operations to succeed which strict accounting would
have refused outright: the classic case is a large process calling
fork() purely to exec() a small program.
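
To make that concrete, here is a minimal sketch (mine, not from the
thread; the 512MB figure is just an assumption) of the fork()+exec()
case:

/* Minimal sketch (not from the thread): a big process forking just
 * to exec something small.  The 512MB figure is an assumption. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define BIG (512 * 1024 * 1024)   /* parent holds ~512MB of touched memory */

int main(void)
{
    pid_t pid;
    char *buf = malloc(BIG);

    if (buf == NULL) {
        perror("malloc");
        return 1;
    }
    memset(buf, 1, BIG);          /* actually touch the pages */

    /* Strict accounting must be able to back a second 512MB copy
     * here, so fork() can fail even though the child never touches
     * it.  With overcommit, the COW pages are not charged and the
     * fork() succeeds. */
    pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        execlp("true", "true", (char *)NULL);
        _exit(127);               /* exec failed */
    }
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}

On a machine with a few hundred MB of VM to spare, but not a spare
512MB, a strictly accounting kernel has to refuse that fork(); an
overcommitting one runs it without ever needing the second copy.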

So: under some circumstances, the ABSENCE of overcommit will cause
problems. Having overcommit cannot make things worse, and it makes
the system much less resource intensive (=> cheaper), since you no
longer need RAM and swap backing reservations that will never be
touched.
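
If you want to watch the difference from userland, here is another
hypothetical sketch (the sizes are assumptions; pick NCHUNKS so the
total exceeds your RAM + swap):

/* Hypothetical sketch: watching overcommit from userland.  The
 * sizes below are assumptions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK   (64UL * 1024 * 1024)    /* 64MB per allocation */
#define NCHUNKS 128                     /* 8GB requested in total */

int main(void)
{
    static char *bufs[NCHUNKS];
    int i;

    /* Phase 1: reserve address space without touching it.  An
     * overcommitting kernel grants all of it; a strictly accounting
     * kernel returns NULL somewhere near RAM + swap. */
    for (i = 0; i < NCHUNKS; i++) {
        bufs[i] = malloc(CHUNK);
        if (bufs[i] == NULL) {
            printf("malloc failed after %lu MB: the clean bullet\n",
                   (unsigned long)i * (CHUNK >> 20));
            return 0;
        }
    }
    printf("reserved %lu MB without touching a page\n",
           (unsigned long)NCHUNKS * (CHUNK >> 20));

    /* Phase 2: actually touch the pages.  Under overcommit this is
     * where failure finally surfaces: not as an error return, but
     * as the kernel killing a process. */
    for (i = 0; i < NCHUNKS; i++)
        memset(bufs[i], 1, CHUNK);

    printf("touched it all: the VM really was there\n");
    return 0;
}

Phase 1 is where strict accounting says no, cleanly; under overcommit
it sails through, and any failure is deferred to phase 2, as a kill
rather than an error return. That is exactly the trade being argued
about above.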

James.
