I'm not clear on when VMAs are shared.

Say separate processes are sharing a file (say, a system log file, or
/etc/passwd) and reads are sequential. The problem isn't scheduling too
many read-ahead I/Os; it's that every process reading this "shared" file
sequentially will trigger the read-ahead code. This generates a whole
bunch of page cache lookups that could be avoided. Not a big deal, but it
would be nice if the logic were smart enough to avoid it -- if read-ahead
is, say, 4 clusters, that's 64 extra page cache lookups every time that
part of the file is requested by some process.
- Chuck Lever
--
corporate: <chuckl@netscape.com>
personal: <chucklever@netscape.net> or <cel@monkey.org>
The Linux Scalability project: http://www.citi.umich.edu/projects/linux-scalability/
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/