Re: broken VM in 2.4.10-pre9

From: Linus Torvalds (torvalds@transmeta.com)
Date: Mon Sep 17 2001 - 11:34:38 EST


On Mon, 17 Sep 2001, Jan Harkes wrote:

> On Mon, Sep 17, 2001 at 02:33:12PM +0200, Daniel Phillips wrote:
> > The inactive queues have always had both mapped and unmapped pages on
> > them. The reason for unmapping a swap cache page when putting it
>
> So the following code in refill_inactive_scan only exists in my
> imagination?
>
> 	if (page_count(page) <= (page->buffers ? 2 : 1)) {
> 		deactivate_page_nolock(page);

No, but I agree with Daniel that it's wrong.

The reason it exists there is that the current inactive_clean list
scanning doesn't feed any pressure back into VM scanning, so if we let
mapped pages onto the inactive queue, reclaim_page() would be unhappy
about them.

That can be solved several ways:
 - like we do now. Hackish and wrong, but kind-of-works.
 - make reclaim_page() have the ability to apply VM scanning pressure
   (i.e. if it starts noticing that there are too many mapped pages on
   the reclaim list, it should trigger a VM scan)
 - physical maps

Actually, now that I look at it, the lack of de-activation hurts
page_launder() - which doesn't get to launder pages that are still mapped
(even though getting rid of buffers from them would almost certainly be
good under memory pressure).

                Linus




This archive was generated by hypermail 2b29 : Sun Sep 23 2001 - 21:00:20 EST