Re: Random file I/O regressions in 2.6

From: Ram Pai
Date: Mon May 03 2004 - 15:25:39 EST


On Mon, 2004-05-03 at 11:08, Andrew Morton wrote:
> Nick Piggin <nickpiggin@xxxxxxxxxxxx> wrote:
> >
> > What ends up happening is that readahead gets turned off, then the
> > 16K read ends up being done in 4 synchronous 4K chunks. Because they
> > are synchronous, they have no chance of being merged with one another
> > either.
>
> yup.
>
> > I have attached a proof of concept hack... I think what should really
> > happen is that page_cache_readahead should be taught about the size
> > of the requested read, and ensures that a decent amount of reading is
> > done while within the read request window, even if
> > beyond-request-window-readahead has been previously unsuccessful.
>
> The "readahead turned itself off" thing is there to avoid doing lots of
> pagecache lookups in the very common case where the file is fully cached.
>
> The place which needs attention is handle_ra_miss(). But first I'd like to
> reacquaint myself with the intent behind the lazy-readahead patch. Was
> never happy with the complexity and special-cases which that introduced.
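
To make concrete what Nick describes above: a user-space analogy
(purely illustrative, not the kernel read path; the file name and
sizes are made up). With readahead off, the request stream degenerates
to the first loop below, where each 4K read must complete before the
next is issued, so the block layer never sees the four pieces together
and has no chance to merge them.

/* Illustrative user-space analogy only -- not the kernel read path.
 * Shows why four synchronous 4K reads cannot be merged, while a
 * single 16K request presents the whole range at once.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

#define CHUNK   4096        /* page-sized piece, as in the 4K reads */
#define REQUEST (4 * CHUNK) /* the application's 16K read */

int main(void)
{
	char buf[REQUEST];
	off_t off;
	int fd = open("testfile", O_RDONLY);	/* hypothetical file */

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Readahead-off behavior: each pread() blocks until its 4K
	 * completes before the next is issued, so the four requests
	 * can never be coalesced into one 16K I/O. */
	for (off = 0; off < REQUEST; off += CHUNK)
		pread(fd, buf + off, CHUNK, off);

	/* What the 16K read should look like: one request covering
	 * the whole range. */
	pread(fd, buf, REQUEST, 0);

	close(fd);
	return 0;
}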

lazy-readahead has no role to play here. The readahead window got closed
because the I/O pattern was totally random. My guess is that multiple
threads are generating 16k I/O on the same fd. In such a case the I/Os
can get interleaved, and the readahead window size goes for a toss
(which is the expected behavior).
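
A sketch of the workload I am guessing at (hypothetical; the file
name and offsets are invented): two threads, each perfectly sequential
within its own region, share one fd and therefore one file_ra_state.
The offsets in arrival order are what the readahead code actually
sees, and they jump back and forth:

/* Hypothetical reproduction of the guessed workload: two threads
 * doing sequential 16K reads in different regions of one shared fd.
 * Each thread is sequential, but the interleaved stream of offsets
 * seen by the shared readahead state is not.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/types.h>

#define REQ   (16 * 1024)	/* 16K per read, as in the report */
#define NREQS 8

static int shared_fd;		/* one struct file => one file_ra_state */

static void *reader(void *arg)
{
	off_t base = *(off_t *)arg;	/* per-thread region start */
	char buf[REQ];
	int i;

	for (i = 0; i < NREQS; i++) {
		off_t off = base + (off_t)i * REQ;

		/* Sequential within this thread... */
		pread(shared_fd, buf, REQ, off);
		/* ...but interleaved with the other thread's offsets
		 * in the order the kernel actually sees them. */
		printf("read at offset %lld\n", (long long)off);
	}
	return NULL;
}

int main(void)
{
	static off_t bases[2] = { 0, 64L * 1024 * 1024 };
	pthread_t a, b;

	shared_fd = open("testfile", O_RDONLY);	/* hypothetical file */
	if (shared_fd < 0) {
		perror("open");
		return 1;
	}

	pthread_create(&a, NULL, reader, &bases[0]);
	pthread_create(&b, NULL, reader, &bases[1]);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	close(shared_fd);
	return 0;
}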

Well, if this is in fact the case, the question is:
1. does the I/O pattern really have enough sequentiality to
   deserve readahead?
2. or should we ensure that the interleaved case is somehow
   handled, by including the size parameter? (a sketch of what
   that could look like is below)

I know Nick has implied option (2), but I think from readahead's
point of view it is (1).
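
That said, to make option (2) concrete, here is a purely
illustrative, self-contained sketch -- this is not the real 2.6
readahead code, and page_cache_readahead_sized(), submit_pages() and
the ra_pages == 0 convention are all assumptions invented for the
sketch:

/* Illustrative sketch only -- not the real 2.6 readahead code.
 * The idea behind option (2): even when the readahead window has
 * been shut because the pattern looked random, a read known to span
 * req_size pages could be submitted as one batch instead of page by
 * page.
 */
#include <stdio.h>

struct file_ra_state {
	unsigned long ra_pages;	/* 0 stands in for "window closed" */
};

/* Stub standing in for the routine that submits 'nr' pages starting
 * at page index 'offset' in one batch. */
static void submit_pages(unsigned long offset, unsigned long nr)
{
	printf("submit %lu pages starting at page %lu\n", nr, offset);
}

/* Hypothetical size-aware entry point: req_size is the number of
 * pages covered by the caller's read request. */
static void page_cache_readahead_sized(struct file_ra_state *ra,
				       unsigned long offset,
				       unsigned long req_size)
{
	if (ra->ra_pages == 0) {
		/* Window closed: do not speculate beyond the request,
		 * but still batch the request itself, so a 16K read
		 * is not split into four synchronous 4K reads. */
		submit_pages(offset, req_size);
		return;
	}
	/* ... normal window-sizing logic would go here ... */
	submit_pages(offset, req_size > ra->ra_pages ?
			     req_size : ra->ra_pages);
}

int main(void)
{
	struct file_ra_state ra = { 0 };	/* window closed */

	/* A 16K read = four pages at page index 0: one batched submit. */
	page_cache_readahead_sized(&ra, 0, 4);
	return 0;
}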
RP

