Re: 2.1.110 freepages.min change

Linus Torvalds (torvalds@transmeta.com)
Wed, 22 Jul 1998 20:32:41 -0700 (PDT)


On 23 Jul 1998, Andi Kleen wrote:
> > Yes. And that was also what convinced me that we need some other mechanism
> > to get that - thus the changes in arch/i386/kernel/process.c..
>
> That still does not solve the problem with 8k NFS (which effectively
> needs 16K blocks)

True. Which is not a new problem - it's one of the reasons the Linux NFS
client used to default to 1kB blocks.

It's probably a perfectly fine idea to have a separate pool of larger
pages, and not let single-page allocations deplete that pool at all. I
would certainly accept something like that if it was cleanly done (the
so-called "largearea" patches do this to a limited degree for other
reasons - to have DMA'ble contiguous chunks available for drivers that
need them).
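
As a rough illustration of what such a separate pool might look like, here
is a minimal user-space C sketch (names, sizes and the allocator itself are
made up for illustration, not actual 2.1.x kernel code): multi-page callers
such as an 8k-rsize NFS client draw 16kB blocks from a small reserved pool,
while ordinary single-page allocations go to the general allocator and can
never drain that pool.

    #include <stdlib.h>
    #include <stdio.h>

    #define LARGE_BLOCK_SIZE  (16 * 1024)   /* one "8k NFS"-sized chunk */
    #define LARGE_POOL_BLOCKS 8             /* 128kB reserved in total */

    static char large_pool[LARGE_POOL_BLOCKS][LARGE_BLOCK_SIZE];
    static int  large_used[LARGE_POOL_BLOCKS];

    /* Only callers that genuinely need a contiguous 16kB chunk use this. */
    static void *large_alloc(void)
    {
        for (int i = 0; i < LARGE_POOL_BLOCKS; i++) {
            if (!large_used[i]) {
                large_used[i] = 1;
                return large_pool[i];
            }
        }
        return NULL;        /* pool exhausted: caller backs off or retries */
    }

    static void large_free(void *p)
    {
        for (int i = 0; i < LARGE_POOL_BLOCKS; i++) {
            if (p == large_pool[i]) {
                large_used[i] = 0;
                return;
            }
        }
    }

    /* Ordinary single-page allocations come from the general pool (malloc
     * stands in for the normal page allocator) and never touch large_pool. */
    static void *page_alloc(void)
    {
        return malloc(4096);
    }

    int main(void)
    {
        void *pkt = large_alloc();   /* e.g. buffer for an 8k NFS request */
        void *pg  = page_alloc();    /* unrelated single-page user */
        printf("large block %p, single page %p\n", pkt, pg);
        large_free(pkt);
        free(pg);
        return 0;
    }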

For example, we could just decide that the low 128kB (random number alert)
of memory is never added to the page pool at all, but is only used by
certain allocators that have some known behaviour pattern (for example,
NFS packet allocations have known and controlled behaviour to some degree
- and there are other temporary allocations that could maybe also
benefit).
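
A similarly hypothetical sketch of that "withhold the low 128kB at boot"
variant (again toy user-space C, every name invented here): the region is
never handed to the general page pool at all, and only subsystems with
known, bounded, short-lived allocation behaviour - the NFS packet buffer
case - get slots out of it.

    #include <stddef.h>
    #include <stdio.h>

    #define RESERVED_SIZE (128 * 1024)      /* the withheld low region */
    #define SLOT_SIZE     (16 * 1024)       /* size each trusted caller needs */
    #define NSLOTS        (RESERVED_SIZE / SLOT_SIZE)

    static char reserved_region[RESERVED_SIZE]; /* stands in for the low 128kB */
    static int  slot_busy[NSLOTS];

    /* Handed out only to allocators with controlled behaviour patterns. */
    static void *reserved_get(void)
    {
        for (int i = 0; i < NSLOTS; i++) {
            if (!slot_busy[i]) {
                slot_busy[i] = 1;
                return reserved_region + (size_t)i * SLOT_SIZE;
            }
        }
        return NULL;        /* bounded callers know how to wait or fall back */
    }

    static void reserved_put(void *p)
    {
        size_t off = (size_t)((char *)p - reserved_region);
        slot_busy[off / SLOT_SIZE] = 0;
    }

    int main(void)
    {
        void *buf = reserved_get();         /* short-lived packet buffer */
        printf("reserved slot at %p\n", buf);
        reserved_put(buf);                  /* returned promptly by design */
        return 0;
    }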

The point being that we don't _have_ to have a completely unified memory
pool. Unified pools have advantages and disadvantages, and sometimes it is
useful to be unified on one level but not on another (so temporary buffer
allocations could come from a non-unified pool, while "long-term"
allocations would come from the unified pool or something).

Linus
