Re: Swapping in 2.1.103?

Jim Wilcoxson (jim@meritnet.com)
Thu, 21 May 1998 13:10:42 -0700


I'm no Linux kernel guru, but it is likely that parts of X and Netscape are
executed once (so the pages come into memory after mapping the executable)
and then never used again. It wouldn't make sense to effectively "lock"
these pages into memory and withhold them from the buffer cache just
because they belong to an executable.

However, having said that, it also doesn't make sense that tarring a
filesystem should invalidate the entire buffer cache PLUS page out
application data.

An older OS I'm familiar with (Primos, from Prime Computer) distinguished
between sequential and random access files and never used more than one
buffer for sequential files. This is harder in Unix because there is no
such distinction, but perhaps there could be something like: "if a file has
never been repositioned, then after it is closed, mark its file buffers so
that they will be re-used before paging out non-buffer-cache pages". Also,
sequentially accessing a large file shouldn't wipe out the buffer cache.
If a file has never been repositioned while reading/writing, there is a
good chance that its data in the buffer cache, except for the page where
the file pointer is (or future pages for read-ahead), will not be needed in
the near future. In this case, the number of buffers allocated to the large
file should be bounded somehow, maybe to just a few buffers, and even these
would be marked "highly available" after the file is closed. This of course
wouldn't apply to directory buffers, file indirect buffers, ...

This way, if files like databases are being randomly accessed and someone
sequentially accesses a large file or a bunch of files, it won't throw out
all of the database pages.

The buffer cache should grow to fill available memory, and it should even
grow enough to force executable code and data to be paged out, but only
when there is a high likelihood that the pages already in the buffer cache
will be used again in the near future.

Jim

(Former kernel hacker on a now-defunct operating system that was ahead of
its time...)

At 06:51 PM 5/21/98 +0200, Karl Günter Wünsch wrote:
>Hello,
>
>I have one concern with the latest development kernels, and that concern
>is swapping.
>(I am going to check this on a 2.1.88/2.1.40 if that helps, but it really
>depends on the time at hand, which is a scarce resource at the moment.)
>
>OK. Let's describe the setup:
>
>Freshly booted system: 128 MB of RAM, with really only 25 MB used at the
>time of the test (mostly a Netscape and X11). The test:
> tar cf - . | cat >/dev/null
>
>In my opinion this test shouldn't force any applications to be swapped out
>(because the only thing this test really uses is the buffer cache, which
>is supposed to take up only free memory). Well, the end of the story is
>that I end up with more than 10 MB of swap space used and 11 MB of free
>memory. Why is buffering preferred to keeping applications in memory?
>
> regards
> Karl Günter Wünsch
>

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu