i've implemented a read-ahead algorithm for mmap, and it appears to be
working pretty well for applications that mmap a file then stream data
from it (like mpg123). now i have some "implementation detail" questions
for the list. in no particular order:
+ read-ahead is triggered half a cluster before the end of the
previously read segment. is this too far in advance?
+ after scheduling the next window, should filemap_nopage run the
disk queue, as do_generic_file_readahead does?
+ should the mmap read-ahead logic reuse the read-ahead context
contained in the file struct, or should it maintain separate
context in the vm_area struct?
+ how careful should this logic be about protecting the
read-ahead context from race conditions? a race is more
likely here than in do_generic_file_readahead because
this code handles shared memory areas.
+ what's a reasonable maximum window size? right now i've set it
arbitrarily at 256K. would it be worth it to allow up to a megabyte
per read-ahead? or maybe the maximum value should be parametrized
to the size of physical memory, just like page_cluster?
- Chuck Lever
--
corporate: <chuckl@netscape.com>
personal: <chucklever@netscape.net> or <cel@monkey.org>

The Linux Scalability project:
http://www.citi.umich.edu/projects/linux-scalability/