Re: Folio discussion recap

From: Dave Chinner
Date: Sun Sep 19 2021 - 21:05:21 EST


On Sat, Sep 18, 2021 at 12:51:50AM -0400, Kent Overstreet wrote:
> On Sat, Sep 18, 2021 at 11:04:40AM +1000, Dave Chinner wrote:
> > As for long term, everything in the page cache API needs to
> > transition to byte offsets and byte counts instead of units of
> > PAGE_SIZE and page->index. That's a more complex transition, but
> > AFAIA that's part of the future work Willy is intended to do with
> > folios and the folio API. Once we get away from accounting and
> > tracking everything as units of struct page, all the public facing
> > APIs that use those units can go away.
>
> Probably 95% of the places we use page->index and page->mapping aren't necessary
> because we've already got that information from the context we're in and
> removing them would be a useful cleanup

*nod*

> - if we've already got that from context
> (e.g. we're looking up the page in the page cache, via i_pages) eliminating the
> page->index or page->mapping use means we're getting rid of a data dependency so
> it's good for performance - but more importantly, those (much fewer) places in
> the code where we actually _do_ need page->index and page->mapping are really
> important places to be able to find because they're interesting boundaries
> between different components in the VM.

*nod*

This is where infrastructure like write_cache_pages() is
problematic. It's not actually a component of the VM - it's core
page cache/filesystem API functionality - but the implementation is
determined by the fact there is no clear abstraction between the
page cache and the VM. While the filesystem side of the API is
byte-range based, the VM side is struct page based, and so the
impedance mismatch has to be handled in the page cache
implementation.

Folios are definitely pointing out issues like this whilst, IMO,
demonstrating that an abstraction like folios is also a necessary
first step to address the problems they make obvious...

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx