Re: VM performance

Nimrod Zimerman (zimerman@deskmail.com)
Thu, 28 Jan 1999 22:40:29 +0200


On Thu, Jan 28, 1999 at 06:16:35PM +0000, Stephen C. Tweedie wrote:

> > The fact that the kernel trims the cache doesn't mean swap isn't used
> > needlessly. The cache behavior is still much worse than it used to be
> > (though in 2.2.0, it has greatly improved pre9's behavior).
> >
> > It isn't fine, not at all. If you want, I can supply some vmstat reports to
> > show what I mean.
>
> Fine, but vmstat alone means nothing in this respect, because the VM can
> play a number of cunning tricks to keep the same swap page both on disk
> and in memory. A large swap usage in vmstat doesn't tell us anything
> about how fast things are moving. Wall clock time is the only reliable
> benchmark here.

It is hard to time these, as most of what I do isn't time-critical (how do
I time how long mutt takes to respond to the up arrow?).

I can clearly see (using ps, vmstat and my HD light) that mutt gets
needlessly swapped out of memory (even while there is a 6MB cache) whenever
I do something slightly memory-intensive in another console - any
allocation of "a lot" of memory triggers this behavior.
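
For what it's worth, the trigger is nothing exotic. A minimal sketch of the
sort of allocation I mean is below - the 32MB figure and the sleep are only
illustrative assumptions, not literally what I ran:

/* hog.c - allocate and touch "a lot" of memory, then sit still so the
 * effect on other processes can be watched with vmstat/ps.
 * Build with: gcc -o hog hog.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MEGS 32        /* assumption: "a lot" on this machine */

int main(void)
{
        size_t size = (size_t)MEGS * 1024 * 1024;
        char *p = malloc(size);

        if (!p) {
                perror("malloc");
                return 1;
        }
        /* Touch every byte so the kernel really has to find the pages. */
        memset(p, 1, size);
        printf("touched %d MB, sleeping so you can watch vmstat...\n", MEGS);
        sleep(60);
        free(p);
        return 0;
}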

I can't really time things, because raw timing isn't what this is about -
interactivity is (though the two are obviously related).

> vmstat is _very_ useful at identifying why you see the observed
> performance, but I won't accept that just because vmstat doesn't say
> what it used to that the VM must be doing worse!

Well, what I'm feeling is unneeded swapping, and what I'm seeing is
something like this (captured while loading something):

procs            memory             swap       io       system       cpu
 r  b  w   swpd  free  buff  cache   si   so   bi   bo    in   cs  us  sy  id
 2  0  0   5032   496    88   5328    3    0  519    0  1149   98  59  40   1
 1  0  0   5028   488    92   5004    1    0  361    0   829  111  62  28  10
 0  0  0   5024   488    92   5000    0    0    0    0   109  124   4   3  93
 0  0  0   5024   488    92   5000    0    0    0    0   110  125   1   2  97
 0  0  0   5024   528   116   4932   21    0  134    2   415  157  11  12  77
 0  0  0   5024   528   116   4932    0    0    0    0   105  123   1   2  97
 1  0  0   5008   228   112   4616    9    0  482    6  1096  167  16  15  69
 2  1  0   5484   388   104   4632  136  215  381   54  1498  156  24  21  55
 2  0  0   5580   480   112   4512    4   32   13    8   199  132  93   4   3
 2  0  0   5616   380   120   4464   31   15  328   14   856  139  46  12  43
 2  0  0   5992   160   112   4896   35  133  538   37  1506  222  15  21  64
 2  0  0   6456   312   136   5116  153  183  320   46  1339  153  24  12  64
 2  0  0   6896   472   104   5324  217  232  217   69  1350  179  32  18  50
 3  0  0   6916   404   100   5384  211  116  237   29  1126  227  43  21  35
 2  1  0   7412   172   112   5908  135  183  377   47  1430  237  32  18  50
 2  2  0   7796   288   108   6056  593  435  328  111  2543  132  11  29  61
 1  0  0   8000   400   104   6056   43   84   26   21   388   47  88   9   3
 1  0  0   8168   480   108   6100   11   64   37   16   321   98  53  15  32
 2  0  0   8168   480   108   6100    0    0    0    0   104  156   1   2  97
 2  0  0   8168   480   108   6100    0    0    0    0   103  155   1   2  97
 0  0  0   8168   480   108   6100    0    0    0    0   103  161   1   2  97
 0  0  0   8168   432   108   6148   11    0    7    0   180  224   1   2  96

If you can suggest a way to meter this behavior more accurately, please
let me know.
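
The closest I can get to a real wall-clock measurement on my own is
something like the sketch below: touch a fixed working set, let the
memory-intensive job run, then time how long touching the same pages again
takes. The 8MB working set is just an assumption standing in for mutt's
resident pages, and "retouch" is a name I made up:

/* retouch.c - a crude wall-clock probe.  A much slower second pass
 * suggests the pages were pushed out to swap in the meantime.
 * Build with: gcc -o retouch retouch.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

#define WORKING_SET (8 * 1024 * 1024)   /* assumption */

static double touch_all(char *p, size_t len)
{
        struct timeval t0, t1;
        size_t i;

        gettimeofday(&t0, NULL);
        for (i = 0; i < len; i += 4096)         /* one touch per page */
                p[i]++;
        gettimeofday(&t1, NULL);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
}

int main(void)
{
        char *p = malloc(WORKING_SET);

        if (!p) {
                perror("malloc");
                return 1;
        }
        memset(p, 0, WORKING_SET);
        printf("first pass:  %.3f sec\n", touch_all(p, WORKING_SET));
        printf("run the memory-intensive job now, then press Enter\n");
        getchar();
        printf("second pass: %.3f sec\n", touch_all(p, WORKING_SET));
        free(p);
        return 0;
}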

Nimrod

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/