Re: Bugs and wishes in memory management area

Jean Francois Martinez (jfm@sidney.remcomp.fr)
Mon, 25 Nov 1996 23:05:40 +0100


Date: Sat, 23 Nov 1996 02:30:56 +0000 (GMT)
From: Gerard Roudier <groudier@club-internet.fr>
X-Sender: groudier@localhost
cc: jon@gte.esi.us.es, rriggs@tesser.com, alan@lxorguk.ukuu.org.uk,
linux-kernel@vger.rutgers.edu
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII

On Fri, 22 Nov 1996, Jean Francois Martinez wrote:

> Date: Fri, 22 Nov 1996 01:07:29 +0000 (GMT)
> From: Gerard Roudier <groudier@club-internet.fr>
>
> 4) Suggestions:
> A suggestion, for >16 MB systems, is to set aside if necessary up to 1 MB
> in a pool for ISA/DMA, but to keep the kernel simple.
> Kernel modules/drivers that allocate ISA/DMA memory when it is not
> needed are to be reworked (or rewritten).
>
> 5) Greetings:
>
> Gerard.
>
> Interesting idea. But there are still boxes with less than 16 megs.
> Ah, the good old days when Linux was usable with 2 megs and was
> advertised as being small and mean.
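The reserved-pool suggestion quoted above could be sketched roughly like this — a buffer grabbed once at startup, before memory gets fragmented, with a trivial bump allocator on top. All the names here (dma_pool_init, dma_pool_alloc) are illustrative, not a real kernel API, and malloc stands in for the boot-time allocation:

```c
#include <stdlib.h>
#include <stddef.h>

#define DMA_POOL_SIZE (1024 * 1024)  /* the suggested 1 MB, reserved at boot */

static char  *dma_pool;       /* base of the reserved region */
static size_t dma_pool_used;  /* bytes handed out so far */

/* Called once at startup, while low memory is still available. */
int dma_pool_init(void)
{
    dma_pool = malloc(DMA_POOL_SIZE);  /* stands in for a boot-time grab */
    dma_pool_used = 0;
    return dma_pool ? 0 : -1;
}

/* Hand out a chunk from the reserved pool; NULL when exhausted. */
void *dma_pool_alloc(size_t size)
{
    void *p;

    if (!dma_pool || dma_pool_used + size > DMA_POOL_SIZE)
        return NULL;
    p = dma_pool + dma_pool_used;
    dma_pool_used += size;
    return p;
}
```

The point of the design is that allocations from the pool can never fail because of fragmentation or locked pages elsewhere — at the price of 1 MB that is dead weight when no ISA/DMA user is loaded.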

At work, we use Linux/X/GNU as a development system (about 15 Linux stations).
2 megs is probably just enough to load the kernel, and the kernel is not pageable.
That is just what modules are made for: having a small nucleus and loading
additional services when needed.

Linux-2 + X server + fvwm (95) + several xterms + bash + xemacs + gcc +
some application processes = more than 16 MB.

A serious FTP server or web server probably uses more than 64 MB.

In my flammable opinion, a usable system nowadays needs more than 16 MB, or
at least 16 MB, regardless of the OS it is running.

The spirit of LINUX is trying to make an EFFICIENT OS. I quote Linus
here: "LINUX is so fast it can do infinite loops in less than 5
seconds". I suppose it is even less on an Alpha. :-) Allocating 1
meg of memory just to have it at hand in case we need to do ISA DMA is the
kind of corner cutting that leads to needing 128 megs just to run
"Hello, world\n" on commercial Unixes.

People who use old machines can run Linux with less than 16 MB, but they
must be very careful about which programs are actually loaded.
On those systems, all of the memory is usable for ISA/DMA,
so the ISA/DMA complexity is off topic.

I have 8 megs, an ISA box, and I am having this problem. Linux 2.0
must cope with 1996 machines, and that includes ISA. Machines bought
in past years do not have 64 megs. The problem of unreliable
module loading cannot wait until 2.2. And people with 8 megs do run X and
GNU Emacs; it is the pager's job to make that possible, even if it is a bit
slow when starting.

Now imagine a system with 128 MB of RAM and some controller(s) that use(s)
ISA/DMA, and:
- Some processes have locked memory.
- Some IOs are being processed; the corresponding memory is locked.

(Locked memory CANNOT be swapped.)

Imagine now that one loads a module that requests (rightly or wrongly)
ISA/DMAable memory, and:
- Unfortunately, locked pages preclude this allocation.

What kind of pages would stay locked for ages? When we are loading a module,
we can wait a second until the IO has ended and the pages can be unlocked.

Perhaps there will be unsolvable situations on big machines, but 99% of
the loading failures I have had could have been solved by either:
- reconsidering the priority of the allocation, or
- discarding clean pages of processes (that is, pages we do not need to do
an I/O to page out).

Managing such situations may complexify the kernel too much.
In that case, setting aside 1 MB of memory at startup for ISA/DMA needs is
probably a good solution.

I disagree. I think this would give Linux bad fat. In French:
"mauvaise graisse".

On the other hand, given that 128/16 = 8, about 87% of IOs ((128-16)/128)
will need to be double-buffered. Doing DMA in those conditions is, in my
opinion, pretty stupid.

get_free_pages() must be improved so that modules and drivers
do not fail stupidly when memory is available; however:

- A module that uses GFP_ATOMIC (cannot wait, or is under interrupt) must be
able to recover from allocation failures. If it cannot, it is bogus.
That does not depend on the OS involved; the same semantics exist under
some other systems.

- For other priorities, which allow swapping or freeing memory, Linux should
allow requesting memory with an option to wait for availability, and never
fail if the requested size is reasonable.
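The first point might look like this in a driver — a sketch, not real kernel code: alloc_atomic() and the stats struct are made-up stand-ins for an allocation that must not sleep, and malloc simulates it. The essential part is the failure branch:

```c
#include <stdlib.h>
#include <stddef.h>

struct rx_stats {
    unsigned long delivered;
    unsigned long dropped;   /* failures we recovered from */
};

/* Stand-in for a GFP_ATOMIC-style allocation: it either succeeds
 * immediately or returns NULL; it never waits. */
static void *alloc_atomic(size_t size)
{
    return malloc(size);     /* may be NULL; the caller must cope */
}

/* Interrupt-time receive path: on failure, drop the frame and account
 * for it — never wait. A driver without this branch is bogus. */
int handle_rx(struct rx_stats *st, size_t frame_len)
{
    void *buf = alloc_atomic(frame_len);

    if (!buf) {              /* recovery path: no memory right now */
        st->dropped++;
        return -1;
    }
    /* ... copy the frame into buf, queue it for the upper layers ... */
    st->delivered++;
    free(buf);               /* simulation only; a real driver queues it */
    return 0;
}
```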

On the latter I agree. Memory allocation has to be reconsidered. And
if we cannot swap, we should at least be able to free memory.

ISA/DMA and memory allocation priorities are two different things.
Mixing them is just confusing.

Gerard.

ISA DMA is slow, but using the CPU to do the transfer makes it
unavailable to the processes. Not a problem with DO$/Winsuck. But
thinking about it, a 128 meg box will have to use the CPU sooner or
later to move the data to the process (87% of the time past 16 megs IF
your memory is full). It does not make sense to first DMA and then
copy above 16 megs. Better not to use DMA at all if the final
destination is above 16 megs.
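The arithmetic behind that argument can be sketched as follows — the 16 MB limit is the real ISA constraint, but the helper names and the threshold constant are illustrative, not an actual kernel interface:

```c
#include <stdint.h>

#define ISA_DMA_LIMIT (16UL * 1024 * 1024)  /* ISA DMA reaches only the first 16 MB */

/* Does a transfer to this physical range end up above the ISA line,
 * forcing a CPU copy through a low "bounce" buffer afterwards? */
int needs_bounce(uint64_t phys_addr, uint64_t len)
{
    return phys_addr + len > ISA_DMA_LIMIT;
}

/* Fraction of a machine's RAM that ISA DMA cannot reach directly:
 * on a 128 MB box, (128 - 16) / 128 = 0.875, i.e. ~87% of pages. */
double unreachable_fraction(uint64_t ram_bytes)
{
    if (ram_bytes <= ISA_DMA_LIMIT)
        return 0.0;
    return (double)(ram_bytes - ISA_DMA_LIMIT) / (double)ram_bytes;
}
```

On an 8 MB box like mine, unreachable_fraction() is 0 — every page is DMAable, which is why the complexity feels misplaced on small machines.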

The more I think about it, the more I want to move to non-Intel Linuxes.

-- 

Jean Francois Martinez

-What is the operating system used by Bill Gates? -Linux, of course. Do you think stupid people become millionaires?