Re: [PATCH 19/21] swap_prefetch: Conversion of nr_unstable to ZVC

From: Christoph Lameter
Date: Mon Jun 12 2006 - 20:07:18 EST


On Tue, 13 Jun 2006, Con Kolivas wrote:

> The comment should read something like:

If we need another round then maybe it would be best if you would do that
patch.

This?

Subject: swap_prefetch: conversion of nr_unstable to per zone counter
From: Christoph Lameter <clameter@xxxxxxx>

Determining the VM state is no longer expensive now that get_page_state()
is not used.  Drop the test_pagestate logic that was only there to avoid
performing those expensive checks on every pass; the per-node counter
reads are cheap enough to do unconditionally.

Signed-off-by: Christoph Lameter <clameter@xxxxxxx>
Signed-off-by: Con Kolivas <kernel@xxxxxxxxxxx>
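
For reference, a minimal sketch of what the conversion boils down to
(illustration only, not part of the patch; the identifiers are the ones
visible in the diff below):

	/* Old: fill a whole struct page_state just to read one field. */
	struct page_state ps;

	get_page_state_node(&ps, node);	/* sums per-cpu state, expensive */
	limit += ps.nr_unstable;

	/* New: a single cheap read of the per-node ZVC counter. */
	limit += node_page_state(node, NR_UNSTABLE);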

Index: linux-2.6.17-rc6-cl/mm/swap_prefetch.c
===================================================================
--- linux-2.6.17-rc6-cl.orig/mm/swap_prefetch.c 2006-06-12 13:37:47.283159568 -0700
+++ linux-2.6.17-rc6-cl/mm/swap_prefetch.c 2006-06-12 17:06:44.875945042 -0700
@@ -298,7 +298,7 @@ static int prefetch_suitable(void)
{
unsigned long limit;
struct zone *z;
- int node, ret = 0, test_pagestate = 0;
+ int node, ret = 0;

/* Purposefully racy */
if (test_bit(0, &swapped.busy)) {
@@ -307,17 +307,15 @@ static int prefetch_suitable(void)
}

/*
- * get_page_state and above_background_load are expensive so we only
- * perform them every SWAP_CLUSTER_MAX prefetched_pages.
+ * above_background_load() is expensive so we only perform
+ * it every SWAP_CLUSTER_MAX prefetched_pages.
* We test to see if we're above_background_load as disk activity
* even at low priority can cause interrupt induced scheduling
* latencies.
*/
- if (!(sp_stat.prefetched_pages % SWAP_CLUSTER_MAX)) {
- if (above_background_load())
+ if ((!(sp_stat.prefetched_pages % SWAP_CLUSTER_MAX)) &&
+ above_background_load())
goto out;
- test_pagestate = 1;
- }

clear_current_prefetch_free();

@@ -357,7 +355,6 @@ static int prefetch_suitable(void)
*/
for_each_node_mask(node, sp_stat.prefetch_nodes) {
struct node_stats *ns = &sp_stat.node[node];
- struct page_state ps;

/*
* We check to see that pages are not being allocated
@@ -375,11 +372,6 @@ static int prefetch_suitable(void)
} else
ns->last_free = ns->current_free;

- if (!test_pagestate)
- continue;
-
- get_page_state_node(&ps, node);
-
/* We shouldn't prefetch when we are doing writeback */
if (node_page_state(node, NR_WRITEBACK)) {
node_clear(node, sp_stat.prefetch_nodes);
@@ -394,7 +386,8 @@ static int prefetch_suitable(void)
node_page_state(node, NR_ANON) +
node_page_state(node, NR_SLAB) +
node_page_state(node, NR_DIRTY) +
- ps.nr_unstable + total_swapcache_pages;
+ node_page_state(node, NR_UNSTABLE) +
+ total_swapcache_pages;
if (limit > ns->prefetch_watermark) {
node_clear(node, sp_stat.prefetch_nodes);
continue;