On Wed, 19 Feb 1997, John Wyszynski wrote:
>
> SUICIDE! No wonder Linux gets such a rap for being unreliable. (If this is
> truly how things work, someone please tell me this isn't so.)
> As a programmer I expect that when I have successfully requested memory to
> be allocated, it really has happened. It now appears that, on top of
> everything else, simply by writing to memory at the wrong time I could run
> out of virtual memory.
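Just to make the quoted complaint concrete, here is a minimal sketch of the
behaviour being objected to -- my own illustration, assuming a kernel that
overcommits. The allocation call reports success, but pages are only claimed
when they are first written, so any shortage shows up at the write, not at
the malloc().

    /* malloc() can return non-NULL even when there is not enough
     * RAM + swap to back every page; the shortage only surfaces
     * when the pages are actually touched. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t len = 512UL * 1024 * 1024;  /* adjust for your machine */
        char *p = malloc(len);             /* usually succeeds...     */

        if (p == NULL) {
            perror("malloc");
            return 1;
        }

        /* ...but the commitment is only tested here, page by page. */
        memset(p, 0xff, len);

        printf("touched %lu bytes\n", (unsigned long) len);
        free(p);
        return 0;
    }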
>
> > > It would seem prudent to at least track the amount of virtual memory
> > > that has been committed and not allow that figure to exceed the amount
> > > available (say the sum of the phys ram and swap space). In fact, I
> > > thought this is what was being done.
> >
> > That's evil. The system at my desk is a very capable Linux
> > system, P200, 32Mb RAM and 130Mb swap. Many are not so capable, 16Mb RAM,
> > 32Mb swap. Without overcommitment, these systems wouldn't be nearly as
> > useful as they are with it. If a process consuming 16Mb of virtual memory
> > forks, you'd have to have 16 more megs available or fail the fork. :(
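The fork case deserves spelling out. A rough sketch of the common pattern
(my illustration, not code from anyone in this thread): after fork() the
child shares the parent's pages copy-on-write, so a fork-then-exec needs
almost no additional memory, yet strict accounting would still have to
reserve a second 16Mb at fork time or refuse the fork outright.

    /* Parent allocates and really uses 16Mb; the child immediately
     * execs a tiny program, so almost none of the shared pages are
     * ever copied. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 16UL * 1024 * 1024;
        char *p = malloc(len);
        if (p == NULL)
            return 1;
        memset(p, 1, len);              /* the parent's working set */

        pid_t pid = fork();
        if (pid == 0) {                 /* child: typical fork-then-exec */
            execlp("true", "true", (char *) NULL);
            _exit(127);
        } else if (pid > 0) {
            wait(NULL);
        } else {
            perror("fork");
        }
        free(p);
        return 0;
    }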
>
> Evil? As much as blade guards on a chainsaw.
>
> This scheme would seem to be even more hazardous for these machines. They
> would probably suffer even more "random" failures. It's even possible
> that a 16Mb process that forks is going to need its own 16Mb, and will
> just fail at some random point in the future. I'm having a real hard time
> understanding how such an unreliable thing can be "useful."
>
> > > BSD does something similar to this (though not all that well) in that
> > > all memory allocations have their swap space allocated at request time.
> > > Any request for which swap space cannot be assigned is failed. This
> > > is efficient speed-wise, but very inefficient in terms of resources,
> > > as it does not allow for a system with less swap space than RAM to use
> > > all of its RAM.
> >
> > I'm not sure I understand either the logic or wisdom of doing
> > that, but in any event, since stacks always grow dynamically, you could
> > never make a Linux system guarantee that memory is available.
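The point about stacks can be seen directly: stack pages are also handed
out on demand, so even a program that never calls malloc() acquires memory
the kernel never promised it up front. A small sketch (mine, purely
illustrative):

    /* Each call frame touches roughly one new page of stack, which is
     * only mapped when it is first written; there is no up-front
     * reservation to check against. */
    #include <stdio.h>

    static long grow(int depth)
    {
        volatile char pad[4096];        /* about one page per frame */
        pad[0] = (char) depth;          /* touch it so it is really mapped */
        if (depth <= 0)
            return pad[0];
        return grow(depth - 1) + pad[0];
    }

    int main(void)
    {
        /* ~1000 frames is a few megabytes of stack, mapped on demand. */
        printf("%ld\n", grow(1000));
        return 0;
    }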
>
> Every other UNIX I know of, and for that matter non-UNIX systems as well,
> does this in some manner. The BSD scheme is/was inefficient as memory
> sizes have grown, but it was designed when few machines in the world had
> as much as 100 Kbytes of memory. The semantics of stack overflows are
> fairly easy to predict, and most UNIX systems have mechanisms in place to
> handle these in a reasonable manner.
>
> > > It would seem to me to be fairly simple and inexpensive to simply keep
> > > track of the current total commitment for each process, and a sum for
> > > the system, and fail any allocation that pushes the system into an
> > > overcommitted state. This is not foolproof of course, e.g. if swap space
> > > is removed from the system, then you could end up overcommitted, but
> > > it seems to me that we would want a system that is running out of virtual
> > > memory to fail gracefully, by failing allocation requests, rather than
> > > having it fail in some other fashion, say by getting seg faults in
> > > processes that are accessing memory that has been allocated to them.
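For what it's worth, the bookkeeping proposed above is cheap to express.
A rough userland sketch (all names here are made up for illustration; this
is not kernel code): keep a running sum of committed pages and refuse any
request that would push the sum past physical RAM plus swap. Unlike the
BSD scheme quoted earlier, the charge is counted against RAM + swap
together rather than against swap alone.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical names, for illustration only. */
    static unsigned long total_commit_pages;   /* system-wide sum   */
    static unsigned long commit_limit_pages;   /* phys RAM + swap   */

    /* Would be called from the allocation path; returning false lets
     * the request fail with ENOMEM instead of overcommitting. */
    static bool commit_charge(unsigned long pages)
    {
        if (total_commit_pages + pages > commit_limit_pages)
            return false;                      /* would overcommit  */
        total_commit_pages += pages;
        return true;
    }

    static void commit_uncharge(unsigned long pages)
    {
        total_commit_pages -= pages;           /* on free or exit   */
    }

    int main(void)
    {
        commit_limit_pages = 12 * 1024;            /* 48Mb in 4k pages   */
        printf("%d\n", commit_charge(4 * 1024));   /* 16Mb: granted -> 1 */
        printf("%d\n", commit_charge(16 * 1024));  /* 64Mb: refused -> 0 */
        commit_uncharge(4 * 1024);
        return 0;
    }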
> >
> > Oh, I disagree -- on behalf of all the people who don't have 128Mb
> > of RAM and 256Mb of swap. You don't realize how high the total
> > (theoretical) commitment of a typical system is.
>
> People who own Yugos shouldn't expect to be able to win the Daytona 500
> either. The best you can hope for is that you don't get killed when the
> engine blows.