Re: Overcommitable Memory...

From: Jesse Pollard (pollard@tomcat.admin.navo.hpc.mil)
Date: Fri Mar 17 2000 - 14:02:11 EST


> I don't understand the whole situation.
>
> Why should you EVER be out of memory if you aren't out of swap in an OS
> that supposedly supports demand memory paging?

1. given: a system with 100MB of memory + 100MB of swap.
   therefore:
        total memory = 200MB.

   problem:
        Edit a 100MB syslog (or messages) file with emacs.
        a. Emacs loads the file into memory, allocates 100MB.
        b. Emacs generates a second window for a previous version of the
           file (also 100MB).
        c. Emacs does a fork to run ls.
        d. An incoming FTP connection occurs.

    result:
        a. emacs has a total of 200MB before the fork.
        b. the fork generates another process of 200MB.

        The OOM condition is now primed. When the FTP connection arrives,
        who gets aborted? inetd, which did a fork duplicating its own size,
        or the emacs process? (A sketch of why the fork succeeds at all is
        below.)
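
To make the arithmetic concrete, here is a minimal C sketch (mine, not
part of the original scenario; the 200MB figure just mirrors the emacs
buffers above) of why the fork in step (c) is allowed to succeed at all:
with overcommit and copy-on-write, fork() only duplicates page tables, so
the shortfall is not noticed until one side actually writes to the pages.

/* Allocate ~200MB, touch it, then fork. The fork itself costs almost
 * nothing; the child's memset() is what forces private copies of the
 * pages, and that is where memory + swap can finally run out and the
 * kernel has to pick a process to kill. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
        size_t sz = 200UL << 20;        /* ~200MB */
        char *buf = malloc(sz);         /* with overcommit this can succeed
                                           even if memory + swap are short */

        if (buf == NULL) {
                perror("malloc");
                return 1;
        }
        memset(buf, 'x', sz);           /* the parent's pages are now real */

        if (fork() == 0) {
                memset(buf, 'y', sz);   /* the child forces its own copies */
                _exit(0);
        }
        wait(NULL);
        free(buf);
        return 0;
}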

> Why, in a pinch, isn't the "most idle" page(s) paged out to make room for
> the latest request and pages brought in by a page fault when they aren't
> presently mapped and are needed?

In the OOM condition there is no place to page it to.

> From my perspective, slow operation is better than processes being
> randomly killed. I've had a web server running on a machine with 96MB of RAM,
> and after several days, Linux will start randomly killing things, and
> inevitably it will clobber processes like nfsd, inetd, portmapper, etc, things
> essential to the proper operation of the system.

So, you HAVE seen the OOM condition.
>
> And then IF you really truly run out of memory including swap, at that
> point refuse additional requests for memory, including fork/exec'ing new
> processes rather than killing off existing processes.

That calls for resource allocation controls.
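
One narrow, per-process form of such a control (only a sketch of the idea,
not the system-wide accounting being asked for) is setrlimit(): capping
RLIMIT_AS makes an oversized request fail in the process that made it,
instead of being overcommitted and settled later by killing something.

/* Cap this process's address space at 64MB (an arbitrary example figure)
 * so that a later 128MB request is refused up front - malloc() returns
 * NULL - rather than granted on credit. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        rl.rlim_cur = 64UL << 20;       /* soft limit */
        rl.rlim_max = 64UL << 20;       /* hard limit */
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
                perror("setrlimit");
                return 1;
        }

        if (malloc(128UL << 20) == NULL)
                fprintf(stderr, "allocation refused up front\n");

        return 0;
}

Refusing fork/exec machine-wide still needs per-user or global limits
(RLIMIT_NPROC only bounds the number of processes per user), which is the
kind of resource allocation policy being asked for here.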

> Maybe it's just me but the current Linux behavior is almost as bad as the
> Xenix that used to run on Tandy 6000's, which being MC68000 based (a CPU which
> has no restart instruction) could not support demand paging (at least not
> without swapping the 68000 for a 68010 and tweaking the kernel which some
> people actually did).

Not the same thing. The 68000-based Xenix was a swapping system; page
faults cannot happen there. Resource allocation controls are a must for
such a system to run: swap size = number of permitted processes * (size of
memory - size of kernel).
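
Purely as an illustration, with made-up numbers: a 4MB machine with a 1MB
resident kernel and 8 permitted processes would need

        8 * (4MB - 1MB) = 24MB of swap

so every permitted process can always be swapped out in full, and a
fork/exec that would exceed the limit is simply refused.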

Anything else permitted the system to hang/deadlock. That hang/deadlock
condition is equivalent to the OOM that happens now under Linux. I used a
68020-based MVME 1000 system, which was still released before the paging
MMU, and I did not have a problem since I had resource controls. It would
occasionally tell me "can't fork/exec - out of swap space", but then I knew
I had to exit my editor before running the program.

> There are a lot of areas where Linux is superior to other Unix's, but this
> is one area where Linux really sucks.

It is one of the worst ones. Even under 2.0.x I get random reboots of the
system.
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@navo.hpc.mil

Any opinions expressed are solely my own.
