Re: 2.1.130 mem usage.

Andrea Arcangeli (andrea@e-mind.com)
Tue, 1 Dec 1998 18:12:05 +0100 (CET)


I have now read the latest mm changes from you, Stephen. So we now have only
one bit to do page aging, and an unused field in mem_map_t. Logically, 32
bits can give us more information than one bit. Instead of wasting 32 bits
and using only 1, why don't we drop the bit and use the 32 bits instead?
I can agree that having both PG_referenced in ->flags and ->age is not a
simple and clean approach. I can agree to use only 1 bit, but I certainly
don't want to waste the `unused' field in mem_map_t ;).

On Mon, 30 Nov 1998, Stephen C. Tweedie wrote:

>@@ -214,7 +214,15 @@
> if (shrink_one_page(page, gfp_mask))
> return 1;
> count_max--;
>- if (page->inode || page->buffers)
>+ /*
>+ * If the page we looked at was recyclable but we didn't
>+ * reclaim it (presumably due to PG_referenced), don't
>+ * count it as scanned. This way, the more referenced
>+ * page cache pages we encounter, the more rapidly we
>+ * will age them.
>+ */
>+ if (atomic_read(&page->count) != 1 ||
>+ (!page->inode && !page->buffers))
> count_min--;

I don't think count_min should count the attempts on pages we have no
chance to free. In my opinion it should be the opposite.

I think that we should decrease count_min if:

((page->inode || page->buffers) && atomic_read(->count) == 1)

is true instead. I am going to do this in my kernel now.

2.1.129 also does this (which will make shrink_mmap() lighter):

@@ -212,8 +207,8 @@
struct page * page;
int count_max, count_min;

- count_max = (limit<<2) >> (priority>>1);
- count_min = (limit<<2) >> (priority);
+ count_max = (limit<<1) >> (priority>>1);
+ count_min = (limit<<1) >> (priority);

page = mem_map + clock;
do {

2.1.130 also does this:

@@ -188,7 +180,7 @@
* asynchronously. That's no problem, shrink_mmap() can
* correctly clean up the occassional unshared page
* which gets left behind in the swap cache. */
- free_page_and_swap_cache(page);
+ free_page(page);
 return 1; /* we slept: the process may not exist any more */
}

I think that doing this we are not really swapping out, because the page is
now also on disk but still in memory, so shrink_mmap() will have twice the
work to do.

@@ -218,7 +210,7 @@
flush_cache_page(vma, address);
pte_clear(page_table);
flush_tlb_page(vma, address);
- entry = page_unuse(page_map);
+ entry = (atomic_read(&page_map->count) == 1);
__free_page(page_map);
return entry;
}

I think this will give twice as much work to shrink_mmap() too.

I'll try to reverse these patches right now in my own tree.

Andrea Arcangeli
