Re: [PATCHv3 15/41] filemap: handle huge pages in do_generic_file_read()
From: Kirill A. Shutemov
Date: Mon Nov 07 2016 - 06:32:41 EST
On Wed, Nov 02, 2016 at 07:36:12AM -0700, Christoph Hellwig wrote:
> On Tue, Nov 01, 2016 at 05:39:40PM +0100, Jan Kara wrote:
> > I'd also note that having PMD-sized pages has some obvious disadvantages as
> > well:
> >
> > 1) I'm not sure buffer head handling code will quite scale to 512 or even
> > 2048 buffer_heads on a linked list referenced from a page. It may work but
> > I suspect the performance will suck.
>
> buffer_head handling always sucks. For the iomap based buffered write
> path I plan to support a buffer_head-less mode for the block size ==
> PAGE_SIZE case in 4.11 at the latest, but if I get enough other things
> off my plate in time even for 4.10. I think that's the right way to go
> for THP, especially if we require the fs to allocate the whole huge
> page as a single extent, similar to the DAX PMD mapping case.
>
> > 2) PMD-sized pages result in increased space & memory usage.
>
> How so?
>
> > 3) In ext4 we have to estimate how much metadata we may need to modify when
> > allocating blocks underlying a page in the worst case (you don't seem to
> > update this estimate in your patch set). With 2048 blocks underlying a page,
> > each possibly in a different block group, it is a lot of metadata forcing
> us to reserve a large transaction (not sure if you'll even be able to
> reserve such a large transaction with the default journal size), which
> again makes things slower.
>
> As said above I think we should only use huge page mappings if there is
> a single underlying extent, same as in DAX to keep the complexity down.
Restricting huge page mappings to a single underlying extent looks like a
major limitation to me.
> > 4) As you have noted, some places like write_begin() still depend on 4k
> > pages, which creates a strange mix of places that use subpages and
> > places that use head pages.
>
> Just use the iomap buffered I/O code and all these issues will go away.
Not really.

Looking at iomap_write_actor(): we still calculate 'offset' and 'bytes'
based on PAGE_SIZE before we even get the page, so we limit ourselves to
PAGE_SIZE per iteration. See the excerpt below.
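For reference, here is the relevant part of the iomap_write_actor() loop,
trimmed down to the lines I mean (exact details may differ slightly
depending on which tree you're looking at):

	do {
		struct page *page;
		unsigned long offset;	/* Offset into pagecache page */
		unsigned long bytes;	/* Bytes to write to page */

		/* Both values are derived from PAGE_SIZE before the page
		 * is looked up, so even if the page cache hands us a huge
		 * page we only fill one PAGE_SIZE chunk per iteration. */
		offset = (pos & (PAGE_SIZE - 1));
		bytes = min_t(unsigned long, PAGE_SIZE - offset,
						iov_iter_count(i));
		...
		status = iomap_write_begin(inode, pos, bytes, flags, &page,
				iomap);
		...
	} while (iov_iter_count(i) && length);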
--
Kirill A. Shutemov