Re: [PATCHv3 17/41] filemap: handle huge pages in filemap_fdatawait_range()

From: Jan Kara
Date: Thu Oct 13 2016 - 07:46:21 EST


On Thu 15-09-16 14:54:59, Kirill A. Shutemov wrote:
> We write back a whole huge page at a time.

This is one of the things I don't understand. Firstly, I didn't see where
writeback changes like this would happen (maybe they come later in the
series). Secondly, I'm not sure why e.g. writeback should behave atomically
wrt huge pages. Is this because the radix-tree multiorder entry tracks
dirtiness for us at that granularity? BTW, can you also explain why we need
multiorder entries at all? What do they solve for us?

I'm sorry for these basic questions, but I'd just like to understand how
this is supposed to work...

Honza


>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> ---
> mm/filemap.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 05b42d3e5ed8..53da93156e60 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -372,9 +372,14 @@ static int __filemap_fdatawait_range(struct address_space *mapping,
> 			if (page->index > end)
> 				continue;
> 
> +			page = compound_head(page);
> 			wait_on_page_writeback(page);
> 			if (TestClearPageError(page))
> 				ret = -EIO;
> +			if (PageTransHuge(page)) {
> +				index = page->index + HPAGE_PMD_NR;
> +				i += index - pvec.pages[i]->index - 1;
> +			}
> 		}
> 		pagevec_release(&pvec);
> 		cond_resched();
> --
> 2.9.3
>
>
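For reference, here is a tiny user-space sketch of how I read the skip-ahead
arithmetic in that hunk. Everything in it (the batch array, head_of(), the
4-subpage "huge page") is made up for illustration; in the real patch
HPAGE_PMD_NR would be 512 for 2MB huge pages on x86-64 and the loop walks a
pagevec of struct page pointers, not an index array.

/*
 * Standalone sketch, not kernel code: all names and values here are
 * illustrative stand-ins for the pagevec loop in the hunk above.
 */
#include <stdio.h>

#define HPAGE_NR 4UL	/* subpages per "huge page"; kept tiny for the example */

/* stand-in for compound_head(): round an index down to its head */
static unsigned long head_of(unsigned long idx)
{
	return idx & ~(HPAGE_NR - 1);
}

int main(void)
{
	/* indices a tagged lookup might return; 8..11 form one huge page */
	unsigned long batch[] = { 7, 8, 9, 10, 11, 20 };
	int is_huge[] = { 0, 1, 1, 1, 1, 0 };
	int n = sizeof(batch) / sizeof(batch[0]);
	unsigned long index;

	for (int i = 0; i < n; i++) {
		unsigned long idx = is_huge[i] ? head_of(batch[i]) : batch[i];

		printf("wait once on index %lu\n", idx);

		if (is_huge[i]) {
			/* same arithmetic as the hunk: advance the cursor past
			 * the huge page and skip its tail entries in the batch */
			index = idx + HPAGE_NR;
			i += index - batch[i] - 1;
		}
	}
	return 0;
}

Running it waits once on index 7, once on the head at index 8 (skipping the
tails 9-11 in the batch), and once on index 20, which is how I understand the
patch intends a huge page under writeback to be handled.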
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR