Re: Absolutely horrid IDE performance...

Mark Lord
Sat, 05 Dec 1998 15:36:18 +0000

Gerard Roudier wrote:
> Due to the way Linux caches blocks, you must use far more than the main
> memory size for the bonnie file size there if you want relevant results.
> You may want to let us know the results using a 512 MB file for
> example.

Been there, done that, no significant difference.
It's still very fast.
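For what it's worth, the "bigger than RAM" rule is easy to automate. Below is a rough sketch (Python; assumes a readable Linux /proc/meminfo, and the mount point is illustrative) that picks a bonnie file size of twice physical memory, so block reads have to come off the disk rather than out of the page cache:

```python
import os

def suggested_bonnie_size_mb(meminfo_path="/proc/meminfo"):
    """Return twice physical RAM, in megabytes, parsed from meminfo."""
    with open(meminfo_path) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                mem_kb = int(line.split()[1])   # value is in kB
                return 2 * mem_kb // 1024
    raise RuntimeError("MemTotal not found in %s" % meminfo_path)

if os.path.exists("/proc/meminfo"):
    # -d (test directory) and -s (file size in MB) as in classic bonnie;
    # /mnt/raid0 is just a placeholder for the array's mount point.
    print("bonnie -d /mnt/raid0 -s %d" % suggested_bonnie_size_mb())
```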

> And if your pair of drives together are really capable of 32 MB/s sustained
> data rate, why is the block read result so much less than this number (23
> MB/s).

Because the drives can perform writes asynchronously to the CPU,
with internal write-gathering. Remember, these drives use exactly
the same mechanisms as the "SCSI" versions of the same models,
and are thus capable of exactly the same internal performance.

All that is different is the external connector and protocol.

They might even be faster if we implemented tagged-queuing
for IDE (new drives now support this for ATA as well as SCSI).

> > Looks pretty wimpy, even by IDE standards.
> 1) The Cheetah 2 sustained data rate is about 18-19 MB/s and we are able to

I know that Cheetahs are fast!

> 3) CPU load is nicely low for the system used for this benchmark.

Hard to tell about that one, since all we have are percentage numbers.
To measure CPU load, one needs measurements of I/O-related execution time,
not percentages.

To me, a low CPU percentage means that the I/O subsystem is slow enough
that the CPU spends most of its time waiting for data. Not good.

If we had an infinitely fast drive, then CPU percentage would always
be around 100% -- no waiting. So the measurement is not useful on an
absolute scale, though it can have meaning when comparing systems with
identical motherboard/CPU/memory to one another.
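To make that concrete, here is a tiny sketch (Python; the throughput and %CPU figures below are made up for illustration, not taken from the bonnie run under discussion) of turning the two reported numbers into CPU seconds spent per megabyte transferred -- the figure that actually measures I/O overhead:

```python
def cpu_seconds_per_mb(throughput_mb_s, cpu_percent):
    """CPU seconds consumed per MB of I/O.

    Elapsed time per MB is 1/throughput; the CPU-busy share of that
    elapsed time is cpu_percent/100.
    """
    return (1.0 / throughput_mb_s) * (cpu_percent / 100.0)

# A faster subsystem at a *higher* %CPU can still cost less CPU per MB:
fast = cpu_seconds_per_mb(23.0, 30.0)   # about 0.013 CPU-sec/MB
slow = cpu_seconds_per_mb(8.0, 15.0)    # about 0.019 CPU-sec/MB
assert fast < slow
```

This is why a low %CPU alone proves little: it may just mean the CPU sat idle waiting on a slow disk.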

The qualitative results on this IDE RAID0 are exceptional.
It is easily the fastest and most responsive system I've
ever used. Kudos to the RAID folks!

