On Wed, Dec 11, 2013 at 3:05 PM, Andrew Morton wrote:
> But I'm really struggling to think up an implementation! The current
> code looks only at the caller's node and doesn't seem to make much
> sense. Should we look at all nodes? Hard to say without prior
> knowledge of where those pages will be coming from.

I really think we want to put an upper bound on the read-ahead, and
I'm not convinced we need to try to be excessively clever about it. We
also probably don't want to make it too expensive to calculate,
because afaik this ends up being called for each file we open when we
don't have pages in the page cache yet.

The current function seems reasonable on a single-node system. Let's
not kill it entirely just because it has some odd corner-case on
multi-node systems.

In fact, for all I care, I think it would be perfectly ok to just use
a truly stupid hard limit ("you can't read-ahead more than 16MB" or
the like).

What we do *not* want to allow is to have people call "readahead"
functions and basically kill the machine because you now have an
unkillable IO that is insanely big. So I'd much rather limit it too
much than too little. And on absolutely no sane IO subsystem does it
make sense to read ahead insane amounts.

So I'd rather limit it to something stupid and small, than to not
limit things at all.

Looking at the interface, for example, the natural thing to do for the
"readahead()" system call is to just give it a size of ~0ul, and let
the system limit things, because limiting things in user space is just
not reasonable.

So I really do *not* think it's fine to just remove the limit entirely.