On Mon, 14 Jun 1999, Andrea Arcangeli wrote:
> Then flushpage may start all the I/O before going to sleep; it may be far
> smarter to simply start I/O for _all_ dirty buffers and then wait for
> I/O completion. [...]
this makes a difference only if blocksize != PAGE_SIZE. We have mainly
concentrated on making the 4k blocksize stuff go like smoke - 1k can and
will be sped up later on, but it will probably never perform as well as
4k ext2fs.
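
for reference, here is a rough, hypothetical sketch of the "start all the
I/O first, wait afterwards" pattern Andrea describes, written against the
buffer-head API of the day (page->buffers, ll_rw_block(), wait_on_buffer());
it is illustration only, not the actual flushpage code:

static void flush_page_buffers(struct page *page)
{
	struct buffer_head *bh, *head = page->buffers;

	/* pass 1: queue I/O for every dirty buffer on the page,
	 * without blocking in between. */
	bh = head;
	do {
		if (buffer_dirty(bh))
			ll_rw_block(WRITE, 1, &bh);
		bh = bh->b_this_page;
	} while (bh != head);

	/* pass 2: only now go to sleep until all of them complete. */
	bh = head;
	do {
		wait_on_buffer(bh);
		bh = bh->b_this_page;
	} while (bh != head);
}

the point of the two passes is that with 1k buffers on a 4k page all the
requests can be in flight at once instead of being submitted and waited
for one at a time.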
some read() scalability numbers on a 4-CPU box, using 4k ext2fs:
1 process reading a 60k file:
READ: 270.75 MB/sec
2 processes reading the same 60k file:
READ: 258.19 MB/sec
READ: 258.12 MB/sec
3 processes reading the same 60k file:
READ: 256.52 MB/sec
READ: 256.54 MB/sec
READ: 256.57 MB/sec
4 processes reading the same 60k file:
READ: 252.95 MB/sec
READ: 252.93 MB/sec
READ: 252.76 MB/sec
READ: 252.87 MB/sec
we scale almost perfectly.
as a comparison, here is the very same workload with the copy into the
same user-space buffer done via memcpy() instead of read():
MEMCPY: 277.73 MB/sec
MEMCPY: 277.68 MB/sec
MEMCPY: 277.72 MB/sec
MEMCPY: 277.65 MB/sec
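
a workload like this can be approximated from user space with something
along the lines of the (hypothetical, simplified) program below: it
re-reads a small, fully cached file of at least 60k in a loop and compares
that against a plain memcpy() of the same amount of data; run several
copies concurrently to see the scaling. the file name and iteration count
are made up for illustration, this is not the tool used for the numbers
above.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

#define FILE_SIZE (60 * 1024)
#define ITERATIONS 100000

static double now(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
	static char buf[FILE_SIZE], src[FILE_SIZE];
	double t0, mb = (double)FILE_SIZE * ITERATIONS / (1024.0 * 1024.0);
	int i, fd = open(argc > 1 ? argv[1] : "testfile", O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* read() loop: the file stays in the page cache, so this
	 * measures the kernel copy plus syscall/locking overhead,
	 * not disk speed. */
	t0 = now();
	for (i = 0; i < ITERATIONS; i++) {
		if (lseek(fd, 0, SEEK_SET) == -1 ||
		    read(fd, buf, FILE_SIZE) != FILE_SIZE) {
			perror("read");
			return 1;
		}
	}
	printf("READ:   %.2f MB/sec\n", mb / (now() - t0));

	/* memcpy() baseline: the same amount of data copied purely in
	 * user space, i.e. roughly the physical memory-copy limit. */
	memcpy(src, buf, FILE_SIZE);	/* seed with the file contents */
	t0 = now();
	for (i = 0; i < ITERATIONS; i++)
		memcpy(buf, src, FILE_SIZE);
	printf("MEMCPY: %.2f MB/sec\n", mb / (now() - t0));

	close(fd);
	return 0;
}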
this shows that we are quite close to the physical limit already. And this
is only the beginning ;)
-- mingo