On Monday, 27 August 2001 21:04, Daniel Phillips wrote:
> On August 27, 2001 08:37 pm, Oliver Neukum wrote:
> > Hi,
> >
> > > - Readahead cache is naturally a FIFO - new chunks of readahead
> > > are added at the head and unused readahead is (eventually)
> > > culled from the tail.
> >
> > do you really want to do this based on pages? Should you not drop all
> > pages associated with the inode that hasn't been touched for the longest
> > time?
>
> Isn't that very much the same as dropping pages from the end of the
> readahead queue?
No, the end of the readahead queue will usually have pages from many
inodes (or perhaps readahead should be attached to open files, as two
tasks may read different parts of one file).
If we are optimising for streaming (which is what readahead is made for),
dropping only one page buys you almost nothing in seek time. You might
just as well drop them all and, if that turns out to be wrong, correct
the error with one larger read. Dropping the oldest page is possibly the
worst thing you can do: in a stream it is the page closest to the current
read position, so it is the one you will need soonest.
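
To make the difference concrete, here is a small userspace sketch of the
two culling policies (my own illustration, not kernel code; names like
ra_page, ra_queue and the cull_* functions are made up for the example):

#include <stdio.h>
#include <stdlib.h>

struct ra_page {
	int file_id;		/* which open file the readahead belongs to */
	long index;		/* page index within that file */
	struct ra_page *next;
};

/* Singly linked FIFO: oldest readahead at the head, newest at the tail. */
struct ra_queue {
	struct ra_page *oldest;
	struct ra_page *newest;
};

static void ra_add(struct ra_queue *q, int file_id, long index)
{
	struct ra_page *p = malloc(sizeof(*p));

	p->file_id = file_id;
	p->index = index;
	p->next = NULL;
	if (q->newest)
		q->newest->next = p;
	else
		q->oldest = p;
	q->newest = p;
}

/* Policy A: cull a single page, the oldest one in the FIFO. */
static void cull_oldest_page(struct ra_queue *q)
{
	struct ra_page *p = q->oldest;

	if (!p)
		return;
	q->oldest = p->next;
	if (!q->oldest)
		q->newest = NULL;
	free(p);
}

/* Policy B: cull every queued page of the file owning the oldest page. */
static void cull_oldest_file(struct ra_queue *q)
{
	struct ra_page **pp = &q->oldest;
	int victim;

	if (!q->oldest)
		return;
	victim = q->oldest->file_id;
	while (*pp) {
		if ((*pp)->file_id == victim) {
			struct ra_page *dead = *pp;
			*pp = dead->next;
			free(dead);
		} else {
			pp = &(*pp)->next;
		}
	}
	/* recompute the tail pointer after the sweep */
	q->newest = NULL;
	for (pp = &q->oldest; *pp; pp = &(*pp)->next)
		q->newest = *pp;
}

int main(void)
{
	struct ra_queue q = { NULL, NULL };
	long i;

	/* interleaved readahead from two streaming files */
	for (i = 0; i < 4; i++) {
		ra_add(&q, 1, i);
		ra_add(&q, 2, i);
	}
	cull_oldest_page(&q);	/* drops only file 1, page 0 */
	cull_oldest_file(&q);	/* oldest is now file 2: drops all of file 2 */

	for (struct ra_page *p = q.oldest; p; p = p->next)
		printf("still queued: file %d, page %ld\n",
		       p->file_id, p->index);
	return 0;
}

The point of cull_oldest_file() is that once you have written a stream
off, every page you keep for it will most likely cost a seek to re-read
anyway, so dropping them together costs nothing extra.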
> > If you are streaming, dropping all should be no great loss.
>
> The question is, how do you know you're streaming? Some files are
> read/written many times and some files are accessed randomly. I'm trying
> to avoid penalizing these admittedly rarer, but still important, cases.
For those other cases we have pages on the active list, which are
properly aged, haven't we?
And readahead won't help you with random access unless you can cache the
whole file. You might throw in a few extra blocks if you have free
memory, but then you have free memory anyway.
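
As a back-of-the-envelope illustration of why (again my own sketch, the
numbers are made up): under uniformly random reads over an n-page file
with c pages cached, a read hits the cache with probability about c/n,
so speculative readahead only starts to pay once the cache approaches
the size of the file:

#include <stdio.h>

int main(void)
{
	long n = 100000;	/* file size in pages */
	long caches[] = { 100, 1000, 10000, 100000 };
	int i;

	/* A uniformly random read hits one of the c cached pages with
	 * probability c/n; prefetched pages are mostly wasted until c
	 * approaches n. */
	for (i = 0; i < 4; i++)
		printf("cache %6ld pages -> expected hit rate %6.2f%%\n",
		       caches[i], 100.0 * caches[i] / n);
	return 0;
}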