Chris Mason wrote:
>
> On Thursday, December 21, 2000 22:38:04 -0200 Marcelo Tosatti <marcelo@conectiva.com.br> wrote:
> >
> > On Thu, 21 Dec 2000, Andreas Dilger wrote:
> >
> >> Marcelo Tosatti writes:
> >> > It seems your code has a problem with bh flush time.
> >> >
> >> > In flush_dirty_buffers(), a buffer may (when called from kupdate) only
> >> > be written if it's old enough (bh->b_flushtime).
> >> >
> >> > If the flush happens for an anonymous buffer, you'll end up writing all
> >> > buffers which are sitting on the same page (with block_write_anon_page),
> >> > but these other buffers are not necessarily old enough to be flushed.
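[A standalone sketch of the behaviour Marcelo describes, for readers following
along. The type and helpers below (bh_sketch, old_enough, count_premature_writes)
are made up for illustration and are not the actual fs/buffer.c code; only the
idea is taken from the thread: kupdate skips buffers whose b_flushtime has not
arrived yet, while writing the whole anonymous page flushes every dirty buffer
chained on it.]

/*
 * Illustrative sketch only, not actual fs/buffer.c code.  The struct and
 * helpers are simplified stand-ins for the 2.4 buffer_head fields under
 * discussion (b_flushtime, buffers chained per page).
 */
struct bh_sketch {
	unsigned long b_flushtime;      /* earliest time kupdate should write it */
	struct bh_sketch *b_this_page;  /* circular list of buffers on the page */
	int dirty;
};

/* kupdate-style policy: a buffer is only flushed once it is old enough */
static int old_enough(const struct bh_sketch *bh, unsigned long now)
{
	return (long)(now - bh->b_flushtime) >= 0;
}

/*
 * The behaviour Marcelo points out: writing via the anon-page path flushes
 * every dirty buffer on the page, including buffers whose flush time has
 * not yet been reached.  This counts how many would go out "early".
 */
static int count_premature_writes(struct bh_sketch *head, unsigned long now)
{
	struct bh_sketch *bh = head;
	int premature = 0;

	do {
		if (bh->dirty && !old_enough(bh, now))
			premature++;
		bh = bh->b_this_page;
	} while (bh != head);

	return premature;
}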
> >>
> >> This isn't really a "problem", however. The flush time is the _maximum_
> >> age of the buffer before it needs to be written. If we can efficiently
> >> write it out with another buffer
> >
> >> (essentially for free if they are on the same spot on disk)
> >
> > Are you sure this is true for buffer pages in most cases?
>
> It's a good point. block_write_anon_page could be changed to just
> write the oldest buffer and redirty the page (if the buffers are
> far apart). If memory is tight, and we *really* need the page back,
> it will be flushed by try_to_free_buffers.
>
> It seems a bit nasty to me though...writepage should write the page.
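A rough sketch of the alternative Chris mentions, using a made-up anon_bh type;
"far apart" is modelled crudely here as non-consecutive block numbers. This is
not a patch, just the shape of the idea: if the page's buffers are scattered on
disk, submit only the oldest dirty buffer and let the caller redirty the page.

/*
 * Illustrative types and helpers, not kernel API.
 */
struct anon_bh {
	unsigned long flushtime;   /* stand-in for bh->b_flushtime */
	unsigned long blocknr;     /* stand-in for bh->b_blocknr */
	int dirty;
	struct anon_bh *next;      /* circular list of buffers on the page */
};

/* crude notion of "close together on disk": consecutive block numbers */
static int buffers_contiguous(struct anon_bh *head)
{
	struct anon_bh *bh = head->next;
	unsigned long expect = head->blocknr + 1;

	while (bh != head) {
		if (bh->blocknr != expect)
			return 0;
		expect++;
		bh = bh->next;
	}
	return 1;
}

/*
 * Returns the single buffer to submit now, or NULL if the whole page should
 * be written (buffers contiguous, so clustering is essentially free).  When
 * only one buffer is written, the caller would redirty the page so the rest
 * are flushed on their own schedule.
 */
static struct anon_bh *pick_oldest_if_scattered(struct anon_bh *head)
{
	struct anon_bh *bh = head, *oldest = NULL;

	if (buffers_contiguous(head))
		return NULL;

	do {
		if (bh->dirty &&
		    (!oldest || (long)(oldest->flushtime - bh->flushtime) > 0))
			oldest = bh;
		bh = bh->next;
	} while (bh != head);

	return oldest;
}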
Um. Why cater to the uncommon case of 1K blocks? Just let
bdflush/kupdated deal with them in the normal way - it's pretty
efficient. Only try to do the clustering optimization when buffer size
matches memory page size.
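For the archive, a minimal sketch of that policy with invented names: cluster
only when a single buffer covers the whole page, otherwise leave the buffer to
the normal bdflush/kupdated path.

#define SKETCH_PAGE_SIZE 4096UL

enum flush_path { FLUSH_WHOLE_PAGE, FLUSH_SINGLE_BUFFER };

static enum flush_path choose_flush_path(unsigned long b_size)
{
	/* clustering only pays off when one buffer covers the page */
	if (b_size == SKETCH_PAGE_SIZE)
		return FLUSH_WHOLE_PAGE;

	/* uncommon 1K-block case: normal bdflush/kupdated handling */
	return FLUSH_SINGLE_BUFFER;
}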
--
Daniel