Re: an experiment in pipe bandwidth improvement

From: Gregory P. Smith (greg@electricrain.com)
Date: Wed Jan 26 2000 - 00:40:36 EST


On Sun, Jan 09, 2000 at 07:32:30PM -0800, Zack Weinberg wrote:
> I attempted to improve the bandwidth of a pipe by enlarging the kernel-side
> buffer if a writer offers more data than there's room for. Currently a pipe
> write is broken up into one-page chunks. The theory was that this would
> reduce the number of context switches and system calls, and therefore improve
> performance.
>
> The results are quite surprising: bandwidth is reduced by about 100 MB/sec.

Quick summary from the rest of this thread: it sounds like this is slower
because the larger writes kill the CPU's L1 cache and because the memory
allocation itself is relatively slow. That makes sense.
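
For reference, here's a rough sketch of the kind of userspace measurement
involved (my own toy, not Zack's benchmark; WRITE_SIZE and TOTAL_BYTES are
arbitrary numbers I picked):

#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

#define WRITE_SIZE   (64 * 1024)              /* bytes offered per write() */
#define TOTAL_BYTES  (256UL * 1024 * 1024)    /* total data pushed through */

int main(void)
{
    static char buf[WRITE_SIZE];
    struct timeval start, end;
    int fds[2];

    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    if (fork() == 0) {                  /* child: writer */
        close(fds[0]);
        for (unsigned long sent = 0; sent < TOTAL_BYTES; sent += WRITE_SIZE)
            if (write(fds[1], buf, WRITE_SIZE) < 0) {
                perror("write");
                _exit(1);
            }
        _exit(0);
    }

    close(fds[1]);                      /* parent: reader */
    gettimeofday(&start, NULL);
    unsigned long got = 0;
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof(buf))) > 0)
        got += n;
    gettimeofday(&end, NULL);
    wait(NULL);

    double secs = (end.tv_sec - start.tv_sec) +
                  (end.tv_usec - start.tv_usec) / 1e6;
    printf("%.1f MB/sec (%lu bytes in %.2f s)\n",
           got / (1024.0 * 1024.0) / secs, got, secs);
    return 0;
}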

Without looking at the Linux pipe code first (a sin, I know), does it
always copy the data twice for pipes? I recall that FreeBSD 2.2.x does
something like the following: the pipe code will "simply" <evil_grin>
remap the page into the reader's target buffer if it is properly aligned
(actually, it would have to map it as the kernel pipe buffer first,
remapping it into the reader only when read() is actually called). The
copy-on-write bit should be set in the writer's page table; we don't want
this to act like shared memory, just to minimize the number of large copies.
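
To make the copy-on-write part concrete, here's a userspace analogue using
fork(): after the fork the parent and child share the page copy-on-write,
so when the "writer" scribbles on its buffer the fault gives it a private
copy and the "reader" still sees the original data. This only illustrates
the COW semantics, it isn't anything like the actual kernel remapping code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    /* a page-aligned "writer buffer", like the aligned case the remap
       trick would require */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    strcpy(buf, "data handed to the pipe");

    if (fork() == 0) {
        /* "reader" side: still sees the original contents even after
           the writer scribbles, because the writer's store faulted and
           went to a private copy of the page */
        sleep(1);
        printf("child sees:  %s\n", buf);
        exit(0);
    }

    /* "writer" reuses its buffer right away */
    strcpy(buf, "writer scribbled over its buffer");
    printf("parent sees: %s\n", buf);
    wait(NULL);
    return 0;
}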

Presumably (hopefully) some benchmarking was done on the FreeBSD code
to determine at what copy size it becomes worthwhile to do this.
Fiddling with page tables is very expensive, but it could still be cheaper
than copying 4k of data twice (once into a kernel pipe buffer, and again
out the other side). On the other hand, it would significantly increase
the complexity of the pipe code.
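
As a very rough sanity check on that tradeoff, something like this compares
two 4k memcpy()s against toggling a page's protection with mprotect(), which
I'm using as a crude userspace stand-in for page table fiddling. It ignores
TLB flushes and locking, so take the numbers as indicative at best; ITERS is
arbitrary:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/time.h>

#define ITERS 100000

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    char *src = malloc(page), *kbuf = malloc(page), *dst = malloc(page);
    char *mapped = mmap(NULL, page, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    double t;
    int i;

    if (!src || !kbuf || !dst || mapped == MAP_FAILED)
        return 1;
    memset(src, 1, page);
    memset(mapped, 1, page);        /* fault the page in before timing */

    t = now();
    for (i = 0; i < ITERS; i++) {
        memcpy(kbuf, src, page);    /* user -> kernel pipe buffer */
        memcpy(dst, kbuf, page);    /* kernel pipe buffer -> reader */
    }
    printf("two %ld-byte memcpys: %.3f us/iter\n",
           page, (now() - t) / ITERS * 1e6);

    t = now();
    for (i = 0; i < ITERS; i++) {
        mprotect(mapped, page, PROT_READ);
        mprotect(mapped, page, PROT_READ | PROT_WRITE);
    }
    printf("two mprotect calls:  %.3f us/iter\n",
           (now() - t) / ITERS * 1e6);
    return 0;
}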

just curious
Greg


