Re: [PATCH] per-zone^Wnode kswapd process

From: William Lee Irwin III (wli@holomorphy.com)
Date: Fri Sep 13 2002 - 19:12:35 EST


William Lee Irwin III wrote:
>> From 64 parallel tiobench 64's (higher counts livelock in fork() etc.):
>>    38 root      15  0     0    0     0 RW  23.0  0.0  1:11 kswapd0
>>  4779 wli       22  0  4460 3588  1648 R   17.9  0.0  0:16 top
>> ...
>>  4779 wli       25  0  4460 3592  1648 R   14.1  0.0  0:27 top
>>    38 root      15  0     0    0     0 DW   3.5  0.0  1:31 kswapd0

On Fri, Sep 13, 2002 at 05:02:56PM -0700, Andrew Morton wrote:
> Why do I see only one kswapd here?
> Are you claiming an overall 4x improvement, or what?
> I'll add some instrumentation which tells us how many pages
> kswapd is reclaiming versus direct reclaim.
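
(As an aside, instrumentation of that sort needs little more than a
pair of counters split by reclaim path. The sketch below is purely
illustrative -- the names reclaim_stat, kswapd_steal, direct_steal and
note_reclaim() are made up here and are not from Andrew's patch or
mine:)

/*
 * Illustrative sketch only: tally pages reclaimed by kswapd versus
 * pages reclaimed directly by allocating processes.  All identifiers
 * here are hypothetical.
 */
struct reclaim_stat {
	unsigned long kswapd_steal;	/* pages freed by kswapd */
	unsigned long direct_steal;	/* pages freed in direct reclaim */
};

static struct reclaim_stat reclaim_stat;

/* Call from the reclaim path with the number of pages just freed. */
static void note_reclaim(unsigned long nr_freed, int from_kswapd)
{
	if (from_kswapd)
		reclaim_stat.kswapd_steal += nr_freed;
	else
		reclaim_stat.direct_steal += nr_freed;
}

Splitting the counts that way would show directly how much reclaim the
extra kswapds absorb versus what still falls back to direct reclaim.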

I can catch the others running if I refresh more often:

   38 root      15  0     0    0     0 DW   4.8  0.0  1:57 kswapd0
   36 root      15  0     0    0     0 SW   2.7  0.0  0:16 kswapd2

 4779 wli       22  0  4476 3604  1648 R    9.2  0.0  0:58 top
   37 root      15  0     0    0     0 SW   2.6  0.0  0:16 kswapd1

   38 root      15  0     0    0     0 DW   2.9  0.0  2:12 kswapd0
   36 root      15  0     0    0     0 SW   1.8  0.0  0:22 kswapd2

 4779 wli       25  0  4476 3600  1648 R    7.4  0.0  1:18 top
   37 root      15  0     0    0     0 SW   2.7  0.0  0:21 kswapd1

 4779 wli       24  0  4476 3600  1648 R   37.5  0.0  1:49 top
   37 root      16  0     0    0     0 RW  11.1  0.0  0:23 kswapd1

 4779 wli       25  0  4476 3600  1648 R   14.1  0.0  1:51 top
   35 root      15  0     0    0     0 SW   6.9  0.0  0:24 kswapd3

   38 root      15  0     0    0     0 RW   2.9  0.0  2:29 kswapd0
   37 root      16  0     0    0     0 SW   1.4  0.0  0:28 kswapd1

etc.
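
(For anyone trying to picture the structure behind kswapd0..kswapd3
above: the idea is one kswapd thread per NUMA node, each tending its
own pg_data_t. A rough sketch in the flavour of later kernels -- using
kthread_run() and for_each_online_node(), which this 2.5-era patch
does not literally use -- would look something like the following:)

#include <linux/kthread.h>
#include <linux/mmzone.h>
#include <linux/nodemask.h>
#include <linux/err.h>

/*
 * Sketch only, modern-kernel flavour, not the code from the patch
 * under discussion: start one kswapd thread per online node, named
 * kswapd0, kswapd1, ...  kswapd() is assumed to be the per-node
 * thread function taking its pg_data_t as the argument.
 */
static void start_per_node_kswapd(void)
{
	int nid;

	for_each_online_node(nid) {
		pg_data_t *pgdat = NODE_DATA(nid);

		pgdat->kswapd = kthread_run(kswapd, pgdat, "kswapd%d", nid);
		if (IS_ERR(pgdat->kswapd))
			pgdat->kswapd = NULL;
	}
}

The visible effect is what the top output shows: reclaim for each node
runs in its own thread, so one busy node doesn't serialize everything
behind a single kswapd.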

Not sure about baselines yet, so I can't say whether it's an overall
4x improvement. I'm happier because there's more CPU utilization:
kswapd0 is still relatively busy, but the other kswapds take some of
the load off of it. The benchmark isn't quite done yet; four dbench
512's in parallel might be easier to extract results from. tiobench
also looks like it's getting some CPU:

   procs             memory                    io        system       cpu
 r     b  w  swpd   free   buff     cache    bi     bo    in    cs  us  sy  id
 7  3649  2     0  12516  13460  15365328    32  11209  1349  3334   3   8  90
 0  3474  3     0  11216   9648  15354584   760  10333  1360  3089   2  10  88
 7  3222  3     0  11872  10744  15367528  3883   8841  1371  2362   1   8  91
 1  2958  2     0  12572  10304  15373820   569  10617  1347  2214   2   8  90

Cheers,
Bill