Caching problems while reading multiple large files...

From: Roy Sigurd Karlsbakk (roy@karlsbakk.net)
Date: Mon Dec 24 2001 - 07:09:31 EST


hi all

Some time ago, I sent a message about a problem I thought was in the
RAID subsystem. I'm now more convinced it's the buffer cache messing
things up.

scenario:

Read 200 large files from disk concurrently (for instance dd if=file01
of=/dev/null, dd if=file02 of=/dev/null, etc.).
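
Roughly like this (a minimal sketch; the file names, file count and
block size are placeholders for my actual setup):

    #!/bin/sh
    # Kick off 200 concurrent sequential readers; each dd reads
    # one large file and discards the data.
    for i in `seq -w 1 200`; do
        dd if=file$i of=/dev/null bs=1M &
    done
    wait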

As this is a 2x120GB IDE RAID-0 config, I get pretty good throughput
even though I'm reading multiple files concurrently (40-50 MB/s).

But... by the time I've read ~800 MB of data (the same amount as the
free memory I had before starting), it suddenly slows down to a mere
1 MB/s.

But... if I then start another I/O operation, even on the same block
device, it works fine.
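
For example (the device and file paths here are just placeholders for
illustration):

    # While the 200 readers crawl at ~1 MB/s, a fresh read
    # still gets full speed, even from the same array:
    dd if=/some/other/bigfile of=/dev/null bs=1M
    dd if=/dev/md0 of=/dev/null bs=1M count=100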

Can anyone help me out here?

roy

--
Roy Sigurd Karlsbakk, MCSE, MCNE, CLS, LCA

Computers are like air conditioners. They stop working when you open Windows.



