Re: analysis of swap performance

Jakob Borg (jb@k2.lund.se)
Sun, 29 Mar 1998 15:13:00 +0000


On Sun, Mar 29, 1998 at 04:32:15AM -0800, George Bonser wrote:
>
> Is this a "memory defrag" attempt by swapping the pages out and then
> swapping them back in with the intent that it will get rid of "holes"? If
> so, that might not be the right approach. It might be better (but more
> expensive code-writing wise) to copy the pages down as holes develop. I
> have a feeling that the swap out-in is an expedient to defrag RAM that
> does not work well as RAM gets full and stuff is getting swapped in/out
> nearly continuously but I am just guessing here.

I get the feeling it gets swapped _out_ but not in again within a reasonable time. I am no seasoned kernel hacker and not very knowledgeable about the vmm code, but to me the concept of keeping free memory available for a program starting up is sound; the problem is that _far_too_much_ code/data is being swapped out. If I sit and write a document in xemacs for an hour and then switch workspace to an xdvi, it shouldn't take ten seconds before xdvi is alive again because everything has to be swapped back in. Sure, the program is "idle" and therefore a candidate for swapping out, but unless there is a good reason to do so I think it should be left in memory. Keeping 50 MB of RAM free just in case I might want to start a big program seems senseless.

It degrades the responsiveness of the system enormously, completely out of proportion to the good it does.

One thing that intuitively strikes me (and that might well be wrong) is that there should be some limit, as a percentage of total RAM, on how much gets swapped out for this purpose. Or at least a _lower_ limit than now. I think 75% free must be unnecessary on most systems. Perhaps an entry somewhere in /proc could be used to adjust the behavior?
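The percentage cap proposed above can be sketched as a simple policy check. This is not kernel code and the names (`should_swap_out`, `max_free_percent`) are hypothetical; it only illustrates the decision rule: keep swapping idle pages out while free RAM is below the cap, and stop once enough is free.

```python
def should_swap_out(free_bytes, total_bytes, max_free_percent=25):
    """Swap idle pages out only while free RAM is below the cap.

    max_free_percent is the hypothetical /proc-tunable limit on how
    much of total RAM may be kept free at the expense of idle pages.
    """
    # Compare free/total against the cap without floating point:
    # free/total < cap/100  <=>  free * 100 < total * cap
    return free_bytes * 100 < total_bytes * max_free_percent

# With 64 MB total and only 8 MB free, the cap is not yet reached,
# so swapping idle pages out is still allowed:
print(should_swap_out(8 * 2**20, 64 * 2**20))   # True
# At 32 MB free (50%), the 25% cap is already exceeded, so idle
# programs like that xemacs should be left in memory:
print(should_swap_out(32 * 2**20, 64 * 2**20))  # False
```

With such a knob exposed in /proc, lowering the value would trade background free-memory reserves for the interactive responsiveness described above.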

-- 
Jakob Borg <jb@k2.lund.se>
Finger for PGP key or info

- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.rutgers.edu