caching/swapping headaches...

Samuli Kaski (samkaski@cs.Helsinki.FI)
Fri, 10 Oct 1997 22:44:39 +0300 (EET DST)


[ Sorry, I know this is a bit off-topic as far as kernel development
goes. If you know the answer, please reply by private e-mail to avoid
wasting more bandwidth. ]

I have been enjoying Linux pre9 for 27 days now, but I still don't
understand why Linux has to get jerky when it has been used for a
longer period of time. The reason, as I see it, is that Linux is a
demand-swapped OS: if I start qwcl when I don't have any physical
memory available, other processes and their data will be swapped out.
But when the program in question terminates, there is no way to bring
the processes and data pages that just got swapped out back into the
real-Linux-performing world unless you wake them up somehow. Some
programs/daemons do have SIGUSR1/2 handling that will force the
process back to life, but not all of them do.
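
Just to make clear what I mean by that, something along these lines
(only an illustrative sketch, not code from any particular daemon; the
buffer size and the 4 kB page size are arbitrary assumptions): the
SIGUSR1 handler merely sets a flag, and the main loop then touches one
byte of every page it owns, so the kernel has to fault the swapped-out
pages back in.

/* Illustrative sketch: pull our own data pages back from swap when
 * SIGUSR1 arrives, by touching one byte per page.  BUF_PAGES and the
 * 4 kB page size are arbitrary example values. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

#define PAGE_SIZE 4096
#define BUF_PAGES 1024                  /* 4 MB of "working set" */

static char buf[BUF_PAGES * PAGE_SIZE];
static volatile sig_atomic_t wake_up;   /* set by the signal handler */

static void usr1_handler(int sig)
{
    (void)sig;
    wake_up = 1;            /* do the real work outside the handler */
}

int main(void)
{
    signal(SIGUSR1, usr1_handler);

    for (;;) {
        pause();            /* sleep until a signal arrives */
        if (wake_up) {
            long i;
            volatile char c;

            /* Reading one byte per page forces the kernel to fault
             * every swapped-out page of buf back into memory. */
            for (i = 0; i < BUF_PAGES; i++)
                c = buf[i * PAGE_SIZE];
            (void)c;
            wake_up = 0;
            printf("touched %d pages\n", BUF_PAGES);
        }
    }
    return 0;
}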

So my question is: is there a way to bring those processes back to
normal operation once the hugely-memory-allocating process has
finished? If not, should there be such a capability within the
kernel?

Most of the time my 32MB P5-100 box sits there with 10-20MB swapped
out on disk and 5-15MB in cache, and almost everything feels jerky
after a couple of days of uptime. Can this be altered somehow in
userspace? (bdflush and freepages aren't the answer, correct me if
I'm mistaken.)

I wrote a silly program that allocates and touches several pages of
memory. The more I use this program, the jerkier my Linux feels (it
frees all its memory on program exit). Would dynamic adaptation to
memory availability cause such a kernel MM bottleneck that it isn't
worth implementing?
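
The test program is roughly along these lines (a simplified sketch of
the idea; the 16 MB allocation and the 4 kB page size are just example
values, not necessarily what I actually used):

/* Rough sketch: allocate a chunk of memory, touch every page so the
 * pages really get backed by RAM (pushing other processes out when
 * memory is tight), then free everything and exit.  The 16 MB size
 * and 4 kB page size are example values only. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE 4096
#define MEGS      16

int main(void)
{
    size_t size = (size_t)MEGS * 1024 * 1024;
    size_t i;
    char *p = malloc(size);

    if (p == NULL) {
        fprintf(stderr, "malloc of %d MB failed\n", MEGS);
        return 1;
    }

    /* Write one byte per page so every page is actually allocated. */
    for (i = 0; i < size; i += PAGE_SIZE)
        p[i] = 1;

    printf("touched %lu pages, holding them for 10 seconds...\n",
           (unsigned long)(size / PAGE_SIZE));
    sleep(10);          /* keep the memory in use for a while */

    free(p);            /* all memory is released before exit */
    return 0;
}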

If you believe your answer would be of interest to the rest of the
Linux community, go ahead and post it to the list; if not, please
e-mail me directly. Thank you.

--
Samuli Kaski, samkaski@cs.helsinki.fi
Department of Computer Science, University of Helsinki, Finland.