Re: Overcommitable memory??

From: James Sutherland (jas88@cam.ac.uk)
Date: Mon Mar 20 2000 - 15:53:23 EST


On Mon, 20 Mar 2000 09:36:56 -0600 (CST), you wrote:
>James Sutherland <jas88@cam.ac.uk>:
>> On Sun, 19 Mar 2000 19:59:33 -0800 (PST), you wrote:
>>
>> >On 19 Mar 2000, Rask Ingemann Lambertsen wrote:
>> >>Den 16-Mar-00 20:18:09 David Whysong wrote:
>> >>> On 15 Mar 2000, Rask Ingemann Lambertsen wrote:
>> >>>>Den 14-Mar-00 18:32:49 Rik van Riel wrote:
>> >
>> >>>>> Not really. Without overcommit you may still have random program
>> >>>>> crashes and lost work...
>> >>>>
>> >>>> Yes, really. Maybe I should have said "additional lost work" instead
>> >>>>of just "lost work". Without overcommit, program crashes will only
>> >>>>happen due software bugs or hardware problems.
>> >> ^^^^^^^^^^^^^^^^^
>> >>
>> >>> Not true. With no overcommit, you can still crash programs due to OOM
>> >>> situations.
>> >>
>> >> Sure, not all programs are bug free. But that's a whole lot different
>> >>from having the kernel kill processes just because the kernel fails to
>> >>perform basic bookkeeping.
>>
>> It is NOT "failing to perform basic bookkeeping". It is simply
>> deferring memory allocation until the memory is used.
>
>Ummm - yes it is. If "deferring memory allocation" means that someone else
>can use the resources instead, then the bookkeeping is not tracking the
>allocated (but not yet used) memory. It HAS been allocated to the first
>process. If it is also allocated to another process, then you are double
>dipping, and eventually that will kill one, two, or all processes.

No, in most cases it causes no problems.

>> >I said crash due to OOM, not due to bugs. I never said anything about bugs.
>> >
>> >You can't avoid memory overcommit without dealing with:
>> > 1. mmap()'ed data
>> > 2. stack growth
>> > 3. kernel dynamic memory allocation
>> >
>> >...and probably a host of other things I know nothing about. You're
>> >talking about adding a tremendous amount of overhead and complexity for no
>> >real benefit.
>>
>> Yes. Disabling demand-allocation would cost you a LOT of
>> functionality.
>
>We are not talking about the modification of process page tables to add a
>data page to them. We are talking about not modifying the process page
>tables when that modification would cause the user's total number of pages
>(used) to exceed the user's quota limit.

That is another issue, nothing to do with overcommitted memory.

>> >>> Memory overcommit is here to stay. As I recall, Linux already used
>> >>> overcommit and COW when I started using it at version 0.99pl13.
>> >>
>> >> So what is /proc/sys/vm/overcommit_memory for, if not to make
>> >>overcommitment of memory an _option_?
>> >
>> >It has already been shown on this list that
>> >
>> ># echo 0 > /proc/sys/vm/overcommit_memory
>> >
>> >does not, in fact, disable memory overcommit.
>>
>> You CAN, however, disable overcommit (i.e. deferred allocation of VM)
>> on a per-process basis, simply by touching memory when you allocate
>> it. Hey presto! You have non-overcommitted VM for your process.
>
>That still doesn't disable it. The OOM condition still occurs.

It will ALWAYS be possible. Disabling, enabling or eating overcommit
has no bearing on that.

>> >>> Get used to it, or find another OS...
>> >>
>> >> You're such a good Linux advocate...
>> >
>> >Hey, this is core stuff that has been in the kernel for a very long time.
>> >It's not going to change, and it shouldn't be changed.
>>
>> Agreed.
>
>It should change when something better is available. The techniques are
>available. They should be used.

So write it. Otherwise, it's not going to be written for quite a long
time. It still has nothing to do with overcommit.

>Overcommit is useful on single-user workstations where it has been
>determined that its use is proper.

And more useful on multiuser systems.

>Overcommit is VERY BAD when used where significant damage ensues when a
>failure occurs. Multi-user servers and embedded systems are two such areas
>where overcommit causes serious problems.

It does not cause failure in ANY circumstance where no failure would
have resulted anyway. It either changes or avoids the failure mode,
that's all.

>Now for a slightly odd set of possibilities:
>
>1. Which is more reliable a) Linux b) NT ?
Linux, but I wouldn't depend on either for any mission-critical
system.

>2. Which would you want firing missiles at you?
If you succeed in firing, you'll hit the wrong target anyway, so I
don't mind - unless I'm on your side, in which case I should hide
behind your target... :-)

>3. What happens if the ship goes OOM occasionally?
You should have backups which take over anyway.

>Guess the OS used for the ship?
If you use either for anything critical, I don't want to get within
firing range of your ship.

James.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Thu Mar 23 2000 - 21:00:30 EST