More generally, this is a caching algorithm which tries to take into
account the cost of retrieving a page (especially if the whole block of
pages swapped out will all be wanted for sequential access at some later
date).
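
(As an aside: a standard way to fold retrieval cost into eviction
decisions is the GreedyDual family of cache policies. The toy C sketch
below shows the idea; it's not anything the kernel actually does, and
all the names in it are made up.)

/* Toy GreedyDual-style eviction: each page carries a credit equal to
 * its refetch cost; the page with the least remaining credit goes,
 * and everyone else is aged by the same amount. */
#include <stdio.h>

#define NPAGES 4

struct page {
    int    id;
    double cost;   /* cost to fetch this page back in          */
    double credit; /* remaining credit; reset to cost on a hit */
};

static int evict(struct page *p, int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (p[i].credit < p[victim].credit)
            victim = i;
    double min = p[victim].credit;
    for (int i = 0; i < n; i++)
        p[i].credit -= min;      /* ageing step */
    return victim;
}

int main(void)
{
    struct page p[NPAGES] = {
        { 0, 1.0, 1.0 }, { 1, 4.0, 4.0 },   /* pages 1 and 3 are    */
        { 2, 1.0, 1.0 }, { 3, 4.0, 4.0 },   /* expensive to refetch */
    };
    printf("evict page %d\n", p[evict(p, NPAGES)].id);  /* a cheap one */
    return 0;
}

Cheap-to-refetch pages get evicted first, which is exactly "taking the
cost of retrieving the page into account".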
I've not looked at the present code, but the old code at least seemed to
scrounge pages from a few processes at a time on a single pass, resulting
in data being fairly well intermixed (fragmented) on the swap drive. It
would be better if the pages were kept as unfragmented as possible; that
also makes read-ahead algorithms in the lower layers much more effective.
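
To illustrate what I mean by keeping them unfragmented, here's a toy
sketch (all the struct and function names are made up, this isn't the
kernel's code): gather the victims of one pass, sort them by owner and
address, and hand out consecutive swap slots, so that each process's
pages form one sequential run on the device.

#include <stdio.h>
#include <stdlib.h>

struct victim {
    int           pid;   /* owning process              */
    unsigned long vaddr; /* virtual address of the page */
    unsigned long slot;  /* swap slot assigned below    */
};

/* Order by owner, then by address, so pages likely to be faulted
 * back in together end up adjacent on the swap device. */
static int victim_cmp(const void *a, const void *b)
{
    const struct victim *x = a, *y = b;
    if (x->pid != y->pid)
        return x->pid - y->pid;
    return (x->vaddr > y->vaddr) - (x->vaddr < y->vaddr);
}

/* Hand out consecutive slots from one free run instead of scattering
 * each page wherever a single free slot happens to be. */
static void assign_slots(struct victim *v, size_t n, unsigned long run)
{
    qsort(v, n, sizeof(*v), victim_cmp);
    for (size_t i = 0; i < n; i++)
        v[i].slot = run + i;
}

int main(void)
{
    struct victim v[] = {
        { 42, 0x3000, 0 }, { 17, 0x8000, 0 },
        { 42, 0x1000, 0 }, { 17, 0x9000, 0 },
    };
    assign_slots(v, 4, 100);
    for (int i = 0; i < 4; i++)
        printf("pid %d page %#lx -> slot %lu\n",
               v[i].pid, v[i].vaddr, v[i].slot);
    return 0;
}

A read-ahead layer that sees slots 100..103 requested in order can then
fetch the whole run in one transfer.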
I know ideally we would have a 'working set daemon' which would note down
which pages of data in RAM are being accessed frequently, and which
aren't. It could do this by randomly locking individual pages out of user
space and catching the traps that result, by keeping a record of how long
a page stayed on the disk the last time it was swapped out, or by
recording which pages are in use when a task switch or system call occurs.
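
The "locking pages out and catching the traps" trick can even be shown
off from user space: the sketch below revokes access to a region with
mprotect() and uses a SIGSEGV handler to record which pages get touched.
It's only an analogue of what a kernel daemon would do, of course.

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NPAGES 4

static long page_size;
static char *region;
static volatile sig_atomic_t touched[NPAGES];

/* Fault handler: note the touched page, then unlock it so the
 * interrupted access can complete when the handler returns. */
static void on_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    long idx = ((char *)si->si_addr - region) / page_size;
    touched[idx] = 1;
    mprotect(region + idx * page_size, page_size,
             PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    region = mmap(NULL, NPAGES * page_size, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_fault;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    /* Lock every page out, then touch only pages 0 and 2. */
    mprotect(region, NPAGES * page_size, PROT_NONE);
    region[0] = 1;
    region[2 * page_size] = 1;

    for (int i = 0; i < NPAGES; i++)
        printf("page %d: %s\n", i,
               touched[i] ? "in working set" : "idle");
    return 0;
}

Pages 0 and 2 come out marked as in the working set; 1 and 3 stay idle.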
This would be especially good for things like daemons, whose swappable
pages (init code etc.) could all be determined quite easily in a few
moments while the machine is idle, instead of during the usual thrash once
things start swapping (with pages going in and out rapidly while the
system works out what is and isn't currently in use).
Again, having this information at hand would be great for swapping, as
the entire unused portion of memory could be swapped out in one continuous
transfer to one place on the swap disk.
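
For example (again just a sketch, with a made-up free-slot bitmap),
finding a long enough free run before starting the transfer is cheap:

#include <stdio.h>
#include <stddef.h>

/* Return the first slot of a run of n consecutive free slots in the
 * swap bitmap (1 = free), or -1 if no such run exists. */
static long find_free_run(const unsigned char *free_map,
                          size_t nslots, size_t n)
{
    size_t run = 0;
    for (size_t i = 0; i < nslots; i++) {
        run = free_map[i] ? run + 1 : 0;
        if (run == n)
            return (long)(i - n + 1);
    }
    return -1;
}

int main(void)
{
    unsigned char map[] = { 1, 0, 1, 1, 1, 0, 1 };
    printf("run of 3 starts at slot %ld\n", find_free_run(map, 7, 3));
    return 0;
}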
Thanks.
.. . . . . . . . . . . . . . ..
:: : : Jon Burgess 01223-461907 : : ::
:: : jjb1003@cam.ac.uk : : ::
:: : : : : : : : : : : : : : ::