Re: Random file I/O regressions in 2.6

From: Andrew Morton
Date: Mon May 03 2004 - 13:12:34 EST


Nick Piggin <nickpiggin@xxxxxxxxxxxx> wrote:
>
> What ends up happening is that readahead gets turned off, then the
> 16K read ends up being done in 4 synchronous 4K chunks. Because they
> are synchronous, they have no chance of being merged with one another
> either.

yup.
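
To make the failure mode concrete: with readahead off, the read loop
degenerates to one synchronous ->readpage() per page, along the lines
of this sketch (illustrative only, not the actual
do_generic_mapping_read() code; error handling and the copy to
userspace are omitted):

	for (i = 0; i < 4; i++) {	/* a 16K read with 4K pages */
		page = find_get_page(mapping, index + i);
		if (!page) {
			page = page_cache_alloc(mapping);
			add_to_page_cache_lru(page, mapping,
						index + i, GFP_KERNEL);
			mapping->a_ops->readpage(filp, page); /* one 4K request */
			wait_on_page_locked(page); /* block before next lookup */
		}
		/* copy this page to userspace, release it, ... */
	}

Each request is submitted and waited upon before the next page is even
looked up, so the block layer never has two of them in the queue at
once and there is nothing to merge.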

> I have attached a proof of concept hack... I think what should really
> happen is that page_cache_readahead should be taught about the size
> of the requested read, and ensure that a decent amount of reading is
> done while within the read request window, even if
> beyond-request-window-readahead has been previously unsuccessful.

The "readahead turned itself off" thing is there to avoid doing lots of
pagecache lookups in the very common case where the file is fully cached.
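
Roughly, the shortcut has this shape (a sketch of the intent only, not
the exact mm/readahead.c logic; the cache_hits field and the threshold
are illustrative names):

	/* every page we "read ahead" was already in pagecache */
	if (ra->cache_hits > MAX_CACHE_HITS) {
		ra->ra_pages = 0;	/* readahead turns itself off */
		return;
	}

Once that trips, the fully-cached case does one pagecache lookup per
page actually read, instead of walking a whole readahead window's worth
of lookups which all hit.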

The place which needs attention is handle_ra_miss(). But first I'd like to
reacquaint myself with the intent behind the lazy-readahead patch. I was
never happy with the complexity and special cases it introduced.
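
For context, handle_ra_miss() is the hook the read path invokes when
readahead is off and a pagecache lookup still misses, along these lines
(a sketch of the call site, close to but not quoted from
do_generic_mapping_read()):

	page = find_get_page(mapping, index);
	if (page == NULL) {
		/* tell the readahead state machine it guessed wrong */
		handle_ra_miss(mapping, ra, index);
		goto no_cached_page;	/* synchronous ->readpage() path */
	}

A fix along the lines Nick suggests would have to decide, at this
point, how aggressively to re-arm readahead for the remainder of the
request window.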

>  	cond_resched();
> -	page_cache_readahead(mapping, ra, filp, index);
> +	page_cache_readahead(mapping, ra, filp, index + desc->count);
>

`index' is a pagecache index, but desc->count is a byte count: adding
them mixes units.
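
If the intent is to tell readahead how far the request extends, the
byte count needs converting to pages first, something like this
(an untested sketch of what the hack presumably meant):

	/* round the remaining byte count up to whole pages */
	page_cache_readahead(mapping, ra, filp,
		index + ((desc->count + PAGE_CACHE_SIZE - 1)
				>> PAGE_CACHE_SHIFT));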